
Project:Village pump (technical)/Archive 180

Is there a tool for this?
Cyphoidbomb (talk) 02:12, 20 March 2020
Hey all, if I think that an editor is restoring a much earlier version of an article, is there a tool that could look at Entire Article A and scan back through the edit history to find all the versions (Entire Article B-Z) that match? WikiBlame doesn't quite do the trick, since it requires a specific phrase, and typically stops as soon as it finds the first instance of that phrase. Context: Let's say I think they might be evading a block, and I want to see if they're restoring a version that they manipulated under a different account at some point in the distant past. Thanks,
John of Reading (talk) 07:28, 20 March 2020
@Cyphoidbomb: WikiBlame should be able to help with this. Set the "Start date" back a bit, to a date before the old version was re-instated, and tick the box "Look for removal of text". It should then find the last of the old revisions that contained your phrase. --
MusikAnimal (talk) 16:51, 20 March 2020
You could also run a database query for this, if you're searching for previous revisions that exactly match a given revision in its entirety. See quarry:query/43132; this one finds all revisions of my sandbox where it was a blank page (i.e. matching the content of Special:PermaLink/944834000). Just fork that query and replace the page_title, page_namespace and rev_id accordingly (see comments). Hope this helps,
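For anyone forking it, the shape of such a query on the Toolforge replicas is roughly as follows (table and column names follow the standard MediaWiki schema; the page title, namespace, and revision ID are placeholders to replace, as the comments in the linked query describe):

```sql
-- Find all revisions of a page whose content hash (rev_sha1) matches
-- that of a given revision, i.e. revisions with identical full text.
SELECT r.rev_id, r.rev_timestamp
FROM revision r
JOIN page ON r.rev_page = page_id
WHERE page_namespace = 0                  -- 0 = article namespace
  AND page_title = 'Example_article'      -- underscores, not spaces
  AND r.rev_sha1 = (SELECT rev_sha1 FROM revision WHERE rev_id = 944834000);
```

Matching on the stored content hash avoids comparing full revision texts, which the replicas do not hold anyway.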
Cyphoidbomb (talk) 19:35, 20 March 2020
@MusikAnimal: I think this might be exactly what I need, thank you. I gave it a whirl and ran into some issues. If I was searching an article, should the page_namespace value be zero? That seemed to work, but I wanted to double-check. Where can I find the other appropriate values for, say, Wikipedia space, Article space, draft space, etc.? Thanks!
John of Reading (talk) 19:58, 20 March 2020
@Cyphoidbomb: Those numbers are listed in the box at the top right of Wikipedia:Namespace. --
Cyphoidbomb (talk) 20:05, 20 March 2020
@John of Reading: Aha! Thank you! Also thank you for your response above. Regards,
Counting red links
Sphilbrick (talk) 21:36, 20 March 2020
I created a template for the {{Honda Sports Award}}, which has 620 entries. I exported to Excel so I could see how many names were duplicated, but I would also like to count red links. While I could do so manually, I would like to monitor the count over time, so I'd like to know if there is an easy way to count them.--
Jorm (talk) 21:53, 20 March 2020

document.querySelectorAll('a.new').length <-- gets you all of them

cheers.--

Jorm (talk) 21:56, 20 March 2020

If you want just in that template, it gets more complicated:

let count, container = document.querySelector('div[aria-labelledby="Honda_Sports_Award"]');
if (container) { count = container.querySelectorAll('a.new').length; }

But if you just run it from the template's page, the first one will work. --

Sphilbrick (talk) 01:34, 21 March 2020
@Jorm:, Thank-you, that worked like a charm.--
Help understanding this Javascript.
Tenryuu (talk) 04:32, 22 March 2020

Hi everyone. I've recently discovered this Javascript code that recognises bolded keywords and adds symbols in front of them. I made a fork out of it on my common/js by adding icons and altering keywords. Here is my understanding of it:

  • Under var vs=, each icon corresponds to an integer starting from 0, with each new line incrementing the count by 1.
  • Under var la=, each item associates a string with the icon's integer.

I'm having some success with changing and adding icons and strings, but my question is how would it prioritise which string to use? Say I have different icons that I want to use for two terms, "delete" and "speedy delete". Is there a way to suppress the delete icon when "speedy delete" is typed? --

PrimeHunter (talk) 13:19, 22 March 2020
var vs= ... actually makes a giant string where "+" is string concatenation. var vt=vs.split("#"); then splits the string at each "#" to make an array numbered from 0. The result is as you say. The script is currently based on individual words and selects an icon for each matching word. If you set an icon for 'speedy' alone then you get both the speedy icon and the delete icon when the page says "speedy delete". Is that OK? You would similarly get two icons for "speedy keep".
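In other words, the gadget builds its icon list with the following pattern (a minimal standalone sketch; the file names here are made up):

```javascript
// "+" concatenates the pieces into one long string, and split("#")
// turns it into a zero-indexed array of icon entries, one per line.
var vs = "icon-keep.svg" + "#" +
         "icon-delete.svg" + "#" +
         "icon-comment.svg";
var vt = vs.split("#"); // ["icon-keep.svg", "icon-delete.svg", "icon-comment.svg"]
```

Each word in `var la=` then maps to an index into this array.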
Tenryuu (talk) 15:12, 22 March 2020

@PrimeHunter:, would it be appropriate to essentially consider it as an OR statement? If the test string "speedy keep" exists, then

  1. the "keep" icon activates
  2. the "speedy keep" icon recognises "keep" but waits on a "speedy" somewhere in the string
  3. the "speedy keep" icon recognises speedy as well and activates

I also noticed that it doesn't appear to do exact string matching. I'm guessing the script doesn't allow for it? --

PrimeHunter (talk) 16:11, 22 March 2020
That is not how it works. A "speedy keep" icon doesn't recognize or activate anything. Icons must be associated with a single word and they activate on any bolded string containing that word. A "speedy" icon activates on a "speedy keep" comment. So does a "keep" icon. Both icons will be shown. Capitalization doesn't matter.
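For what it's worth, if the script were ever rewritten to support phrases, a common approach is to check the longest known phrase first and stop at the first match, so that "speedy delete" suppresses the plain "delete" icon. A hypothetical sketch only (this is not how the current gadget works, and all names are invented):

```javascript
// Map phrases (possibly multi-word) to icon files; names are made up.
var iconFor = {
  "speedy delete": "icon-speedy-delete.svg",
  "speedy keep": "icon-speedy-keep.svg",
  "delete": "icon-delete.svg",
  "keep": "icon-keep.svg"
};

function pickIcon(boldText) {
  var text = boldText.toLowerCase();
  // Sort phrases longest-first so "speedy delete" wins over "delete".
  var phrases = Object.keys(iconFor).sort(function (a, b) {
    return b.length - a.length;
  });
  for (var i = 0; i < phrases.length; i++) {
    if (text.indexOf(phrases[i]) !== -1) {
      return iconFor[phrases[i]];
    }
  }
  return null; // no known phrase found
}
```

With this scheme a bolded "speedy delete" yields only the speedy-delete icon, while a plain "delete" still yields the delete icon.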
Tenryuu (talk) 17:57, 22 March 2020
@PrimeHunter:, I think I understand now. So there's no way to have an icon become associated with a string with more than one word then? --
PrimeHunter (talk) 18:03, 22 March 2020
Right.
Tenryuu (talk) 20:15, 22 March 2020
Alright, thanks for the help. I'll learn a bit more JavaScript when the time comes then. In the meantime I consider this resolved. :) --
Gadget-MenuTabsToggle
Aram (talk) 16:15, 19 March 2020
Hello, this gadget (see the section name) is not working now because the images are no longer available. I fixed it on our project. If you want, please copy the content of ckb:MediaWiki:Gadget-MenuTabsToggle.css and paste it into MediaWiki:Gadget-MenuTabsToggle.css. Thanks! ⇒
Amorymeltzer (talk) 10:12, 21 March 2020
They were just moved again, so is there really a need to rename them all from commons? I guess the svgs are MIA. But does this gadget even still work? ~ Amory
Aram (talk) 21:40, 22 March 2020
@Amorymeltzer: Can you post the new links? I tested the gadget on some other Wikipedias, but it didn't work. The gadget works on our project after changing the file URLs. ⇒
"Strike out usernames that have been blocked" option
Headbomb (talk) 00:03, 23 March 2020

If you have the "Strike out usernames that have been blocked" option enabled in the appearance section of your preferences, it strikes out... well, blocked users. HOWEVER, it doesn't strike out globally locked users.

See Category:Wikipedia sockpuppets of Northernrailwaysfan for example, where User:Mr Fenton's Helicopters and User:Q3 Academy Tipton aren't struck out.

Those should also be struck out. I'm posting here because I don't know where else to post this.

PrimeHunter (talk) 01:50, 23 March 2020
By the way, it's not in the appearance section of preferences, it's in the gadgets section (in the appearance subsection). Sorry to be pedantic but this is critical to where the code is made and can be modified. The actual appearance section is part of MediaWiki and you would have to file a Phabricator request for a change there. We rarely mention the subsection of gadgets but just say gadgets. They are JavaScript and/or CSS made here at the English Wikipedia.
Help Needed (read, begged for): MediaWiki JS Interface
Guywan (talk) 15:15, 23 March 2020

Template:Resolved Ironically, I've been using AJAX to make synchronous API calls. I'd really like to swap over to using MediaWiki's JavaScript API interface, but I don't know how to make synchronous calls to it.

var data;
new mw.Api().get({
    "action": "query",
    "list": "backlinks",
    "bltitle": "Wikipedia:Village pump (technical)"
}).done(result => { data = result; });

console.log(data);

Of course, 'undefined' is logged to the console since get() or done() do not wait for the request to be 'completed'. Is it possible to make synchronous calls to the API via this method? I am aware that I could write }).done(result => { console.log(result); });, but let us ignore that for argument's sake. Regards,

SD0001 (talk) 18:02, 23 March 2020

@Guywan: mw.Api does not support synchronous API calls. Rightfully so, since synchronous calls block the JavaScript runtime's event loop which causes the browser to freeze until the API response is received.

However, you can use the ES6 async/await syntax to make your asynchronous code look synchronous. But note that ES6+ features are not supported by older browsers (IE 11, etc.) and are thus disallowed in site-wide JavaScript.

async function main() {
    var data = await new mw.Api().get({
        "action": "query",
        "list": "backlinks",
        "bltitle": "Wikipedia:Village pump (technical)"
    });
    console.log(data);
}
main();

Guywan (talk) 18:56, 23 March 2020
@SD0001: Thanks. Your help is very much appreciated. Perhaps I need to start thinking of asynchronous solutions. Conservata veritate, comrade.
Edits get lost by the Back button
Macrakis (talk) 21:20, 22 March 2020

When I start editing an article, visit another web page in the same tab, then use the Back button to return to my edits, my edits are gone -- I see the previous state of the page. This behavior is new. Until a week or so ago, I would routinely visit another page, then use Back to return to my incomplete edits. I really liked that functionality.

This appears to be a change in Wikipedia (different HTTP cache-control header, perhaps?), not the browser (I use Chrome 80.0.3987.149 on Windows 10). Here's a quick test I tried. I created the following HTML page:

<!DOCTYPE html>
<html>
  <a href="https://en.wikipedia.org/wiki/Main_Page">Go to WP</a>
  <form>
    <textarea> </textarea>
  </form>
</html>

navigated to it, entered something in the form, clicked on Go to WP, then hit Back, and the textarea content was still there. But if I do the same thing with, say, my Sandbox page, the textbox content is reset. Has something changed recently? Thanks, --

Jonesey95 (talk) 22:13, 22 March 2020
I don't know if anything has changed recently, but ctrl-click or whatever your browser's shortcut is for opening a link in a new tab is a safer alternative for maintaining the state of any web page than trusting the back button. Using the "open link in new tab" feature is a good habit to build. –
Trappist the monk (talk) 23:06, 22 March 2020

I'm also seeing this and yeah, it is damned annoying to lose edits because Chrome changed or because MediaWiki changed; I don't know which. On Win 10 and Chrome, it's Ctrl-click for a new tab that gets the focus. Still, habits are very hard to break, so there is the 'Warn me when I leave an edit page with unsaved changes' option at Special:Preferences#mw-prefsection-editing.

Macrakis (talk) 20:05, 23 March 2020
@Jonesey95: Thanks, I'm familiar with "Open link in new tab/CTRL-click" and other techniques. I still find the old form behavior better. --
Remind me bot
DannyS712 (talk) 18:59, 23 March 2020

I'd like to propose that a bot be used to allow users to schedule reminders for themselves. The basic spec is:

  • User adds an entry to /reminders.json
  • User adds user page to a specific category (not created yet)
  • Each day, the bot retrieves the user pages in that category, transforms them to the reminder pages ("User:Example" => "User:Example/reminders.json"), retrieves the scheduled reminders, and posts any that are due that day
  • User is responsible for removing old reminders from their /reminders.json page; the bot only checks for reminders due "today" (at noon UTC) and so doesn't care if old ones are left

See Wikipedia:Bots/Requests for approval/DannyS712 bot 68 for more. --
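The core of the daily pass described above could be sketched roughly like this (a hypothetical illustration of the spec, not the actual bot code; the JSON shape, mapping ISO dates to reminder text, is an assumption):

```javascript
// Turn a user page title into its reminders subpage, per the spec:
// "User:Example" => "User:Example/reminders.json"
function remindersPage(userPage) {
  return userPage + "/reminders.json";
}

// Given the parsed reminders object and today's date (ISO string),
// return the reminder due today, or null if there is none. Old
// entries are simply ignored, as the spec says.
function dueToday(reminders, todayIso) {
  return Object.prototype.hasOwnProperty.call(reminders, todayIso)
    ? reminders[todayIso]
    : null;
}
```

The bot would then post `dueToday(...)` results to each user's talk page.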

Yair rand (talk) 20:00, 23 March 2020
Why not just make a gadget that stores things in the user's private JSON store (userjs-foo in the options API), and checks that every day? --
DannyS712 (talk) 20:01, 23 March 2020
@Yair rand: Because I don't know how to code that
SD0001 (talk) 04:17, 24 March 2020
A user script already exists: User:SD0001/W-Ping, which does precisely that. The advantage of storing stuff using the options API is that reminders remain private: unlike userspace subpages, they are not viewable by anyone else.
Exclude Templates in Search
Macrakis (talk) 22:09, 23 March 2020
Is there some way to search article text, but exclude navbox templates? For example, if I want to search Wikipedia for rumaki, I typically don't want to find all the pages which include Template:Bacon. One trick is to find an obscure term that appears in the template and search for, e.g., [rumaki -samgyeopsal]; but that isn't right, either, because it excludes pages that mention rumaki in the text of the page as well as in the Bacon template. Thanks, --
PrimeHunter (talk) 22:25, 23 March 2020
Template:Search link searches for pages saying rumaki in the source text. Template:Search link searches pages which say rumaki in the rendered text but do not have the template. It does not find pages which both have the template and say rumaki elsewhere on the page. You may also be interested in User:PrimeHunter/Source links.js. It searches pages which link to Rumaki in the source of the page and not only via a template. It includes pages which link both in the source and via a template. On Rumaki the script says Source links.
Deacon Vorbis (talk) 22:28, 23 March 2020
(edit conflict) I just had to do this recently, and it was a bit easier, because the page had a disambiguator which could be searched for as well, since it would only appear in links and not normal text. It's not a perfect solution, but you can try a regexp by searching insource:/\[\[[Rr]umaki/, which should match anything that links to that page, but it won't catch anything that links to a redirect to it, which may or may not be a problem. If there's a better way, I'd certainly like to know. –
PrimeHunter (talk) 22:40, 23 March 2020
insource regex is expensive by itself and may time out. mw:Help:CirrusSearch#Insource says: "When possible, please avoid running a bare regexp search". If the goal was to find links (I doubt it was although a link was posted), you can search insource:rumaki insource:/\[\[[Rr]umaki/. My script does something similar.
Macrakis (talk) 13:55, 24 March 2020
Thanks, very helpful! --
Link rot: pqarchiver.com is gone.
Example 1
Example 2
A876 (talk) 08:13, 24 March 2020

Edited at Special:Diff/947089613

This link is dead:

https://pqasb.pqarchiver.com/chicagotribune/access/24827004.html?dids=24827004:24827004&FMT=ABS&FMTS=ABS:FT&date=Sep+09%2C+1988&author=Chris+Heim&pub=Chicago+Tribune+(pre-1997+Fulltext)&desc=A+HIT+FOR+OTHERS%2C+SHEAR+LOOKS+OUT+FOR+NO.+1&pqatl=google

Chicago Tribune hosts it online at: (I used this link.)

https://www.chicagotribune.com/news/ct-xpm-1988-09-09-8801290097-story.html

Here, the live copy on chicagotribune.com seems best, but it had to be searched-for.

If the URL at chicagotribune.com goes away, Internet Archive of chicagotribune.com is a fallback.

The "automatable" fix would be an archived copy of the pqasb.pqarchiver.com URL (if one can be found). An archive of an archive seems ugly, but it is better than a broken link. A bot might or might not be able to find a "live" link, depending on the original publisher. If someone wants to manually search out the "live copy" afterward, the archive of the archive can make it easier. -

Izno (talk) 14:04, 24 March 2020
@GreenC: One for you or IABot I think. --
Deepcat searches not returning all results
SD0001 (talk) 18:44, 24 March 2020

Doing a deepcat search like deepcat:"Lakhimpur Kheri district" finds only 76 results, though there ought to be more. Pages like Lakhahi (a member of Category:Villages in Lakhimpur Kheri district which is in turn a member of Category:Lakhimpur Kheri district) don't show up in the results.

mw:Help:CirrusSearch#Deepcategory doesn't mention anything that would be responsible for such behaviour. Is this a bug?

PrimeHunter (talk) 19:06, 24 March 2020
None of the 7 articles in Category:Villages in Lakhimpur Kheri district appear in the search. This appears to be an example of phab:T238686: "Deepcat search returns incomplete results".
Edit screen issue
Jo-Jo Eumerus (talk) 16:59, 22 March 2020

When I try to edit The Pleiades (volcano group), the edit screen works abnormally and cannot be edited properly. It's not the first time this problem has occurred; it seems to hit different articles at different times.

Jo-Jo Eumerus (talk) 17:15, 22 March 2020
Screwing around with the latest addition to my local CSS and clearing caches didn't resolve the problem FYI.
PrimeHunter (talk) 17:41, 22 March 2020
It looks like it tries to make two edit areas. Is that normal for you? I haven't seen it before. Does safemode work? Does it work to log out? Try disabling syntax highlighting with the highlighter button (the CodeMirror icon) to the left of "Advanced". Did you clear the cache with only F5, with Ctrl+F5, or by clearing the entire browser cache? See Wikipedia:Bypass your cache.
Jo-Jo Eumerus (talk) 17:45, 22 March 2020
Safemode works, but I don't understand how the problem occurs. Is there a problem with the code somewhere?
RoySmith (talk) 17:47, 22 March 2020
I've seen something like this before. You've got Editor / "Enable the editing toolbar" checked under Special:Preferences, yes? I think it's related to that. I used to use that, but found it didn't work reliably and turned it off. Try unchecking it and see if that helps? --
Jo-Jo Eumerus (talk) 19:22, 22 March 2020
Well, it helps at removing the citation tools, which is not much better than this bug. So that doesn't work as a solution.
RoySmith (talk) 21:37, 22 March 2020
@Jo-Jo Eumerus:, I've started using the Visual Editor most of the time, especially when I'm creating citations. The built-in citation tool works pretty well. I've also been using User:V111P/js/webRef.js sometimes. One of those might be a useful work-around for you. --
Trappist the monk (talk) 18:11, 22 March 2020

I have seen this before, though not recently. I use Chrome if that matters. For me, if I recall correctly, it appears to happen when the last character added to the edit window (bottom right corner) is a space character that fills, or is the first to overflow, the edit window. Adding that character causes the browser to enable the elevators in the scrollbars. But, it has been a while since I've seen this, so it is highly possible that I am mis-remembering the details. When I did encounter these problems, inserting a crlf (enter key) restored the display. After I finished writing whatever it was that I was writing, I would go back and delete the crlf and bob's-yer-uncle.

The thing I'm seeing most often with this highlighter is mis-registering of the highlighting by a few characters, which, once the highlighting is offset from where it is supposed to be, carries on to the end of the wikisource.

Jo-Jo Eumerus (talk) 19:59, 22 March 2020
For what it's worth I am going with Firefox. I see from the page history that other editors were able to edit the page in the meantime.
MusikAnimal (talk) 01:46, 23 March 2020
@Jo-Jo Eumerus: I seem to recall this issue happening only for User:Remember the dot/Syntax highlighter ("Syntax highlighter" at Special:Preferences#mw-prefsection-gadgets). Do you have that enabled? In your screenshot you don't appear to have CodeMirror turned on (via the CodeMirror icon button), so I would try that instead of the gadget to see if it works better.
Jo-Jo Eumerus (talk) 10:13, 23 March 2020
@MusikAnimal: Partially: After turning it off there is only one screen, but it still doesn't work properly (one cannot navigate in the screen or mark things with the mouse, and the buttons don't work). Of note, there is an unusual grey bar just above the edit window and below the buttons that appears whenever this problem crops up.
Jo-Jo Eumerus (talk) 09:17, 24 March 2020
A little update: It seems like not all versions of the history have the bug at the same time. Maybe it's something with Firefox?

I did manage to complete that article with RoySmith's suggestion, but it won't work for a large article or a multi-page source if the bug appears there as well.

Jo-Jo Eumerus (talk) 20:23, 24 March 2020
I wonder if this has anything to do with the one-year-old phab:T188607.
Excessive use of style attributes makes custom theme changes (e.g. dark theme) difficult-to-impossible
Sollyucko (talk) 19:54, 24 March 2020
Is there a reason why style="..." is used everywhere, instead of CSS classes or IDs? (For example, on this page, there are hundreds of occurrences, including color changes.) Doing so makes large theme customizations, especially color changes such as dark-theming (where color contrast is often an issue), extremely difficult and error-prone. — Preceding unsigned comment added by
Ahecht (talk) 21:13, 24 March 2020
Because creating per-template CSS classes wasn't possible in the software until very recently, and modifying site-wide CSS justifiably takes a huge amount of effort to gain consensus and work through the bureaucracy. --
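For context, the recent capability referred to here is presumably the TemplateStyles extension, which lets a template carry its own stylesheet of real CSS classes. A minimal sketch with invented names:

```wikitext
<!-- In the template's wikitext -->
<templatestyles src="Example/styles.css" />
<div class="example-box">Template content here</div>
```

Template:Example/styles.css would then be a sanitized CSS page containing rules such as `.example-box { border: 1px solid #a2a9b1; }`, which dark themes can override by class instead of fighting inline style attributes.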
New traffic report: Daily article pageviews from social media
Jmorgan (WMF) (talk) 18:56, 23 March 2020

The WMF Research team has published a new report of inbound traffic coming from Facebook, Twitter, YouTube, and Reddit.

The report contains a list of all articles that received at least 500 views from one or more of these sites (i.e. someone clicked a link on Twitter that sent them directly to a Wikipedia article). The report will be updated daily at around 14:00 UTC with traffic counts from the previous calendar day.

We believe this report provides editors with a valuable new information source. Daily inbound social media traffic stats can help editors monitor edits to articles that are going viral on social media sites and/or are being linked to by the social media platform itself in order to fact-check disinformation and other controversial content.

The social media traffic report also contains additional public article metadata that may be useful in the context of monitoring articles that are receiving unexpected attention from social media sites, such as...

  • the total number of pageviews (from all sources) that article received in the same period of time
  • the number of pageviews the article received from the same platform (e.g. Facebook) the previous day (two days ago)
  • The number of editors who have the page on their watchlist
  • The number of editors who have watchlisted the page AND recently visited it

We are currently actively seeking feedback on this report! We have some ideas of our own for how to improve the report, but we want to hear yours. If you have feature suggestions, questions, or other comments please add them to the project talkpage on Meta or ping Jonathan Morgan on his talkpage. Also be sure to check out our growing FAQ.

We intend to maintain this daily report for at least the next two months. If we receive feedback that the report is useful, we are considering making it available indefinitely. Cheers,

Liz (talk) 01:18, 25 March 2020
Thanks for your work on this, Jmorgan. I know I'll be checking it out.
Template help
FlightTime Phone (talk) 22:57, 24 March 2020
I'm trying to make a template here: User:FlightTime/PL display. However, I cannot get the two items to be on the same line. Any help will be gratefully appreciated. Thanx, -
Redrose64 (talk) 23:20, 24 March 2020
You won't if you use Template:Tlx, which emits a table; and tables cannot be inline. --
PrimeHunter (talk) 00:17, 25 March 2020
Well, there is style="display:inline-table;" but something in the ombox class prevents it from working here, and we probably shouldn't try to make it work.
Jonesey95 (talk) 02:35, 25 March 2020
I took the liberty of making an edit. Revert if it was unhelpful. –
FlightTime (talk) 04:43, 25 March 2020
@Jonesey95: that is perfect! Thank you very much. Cheers, -
en.wikipedia.org wants to use your device's location???
RoySmith (talk) 23:44, 24 March 2020
I got a "en.wikipedia.org wants to use your device's location" alert the other day when browsing on my phone. Any idea why wikipedia would request my location? --
RudolfRed (talk) 00:13, 25 March 2020
@RoySmith: It is to provide you with localized content and notices. See [3] and [4]
TheDJ (talk) 08:40, 25 March 2020
@RoySmith:, the tracking indicated by RudolfRed should not ever use that location API on the desktop version of the website though... that's not normal. —
Hyphenation on mobile view
Modest Genius (talk) 16:38, 20 March 2020
Template:Tracked Has there been a change to the configuration of the mobile site? Starting in the last 24 hours, I'm seeing very over-zealous hyphenation being automatically inserted in mobile views. Taking the current Main Page as an example, 30% of the TFA lines and 50% of the ITN lines of text end with an auto-generated hyphen, but look fine on desktop. Turning my phone to landscape view roughly halves those percentages, and moves the hyphens to different places, but it's still an off-puttingly high number of hyphens that makes the text difficult to read. They're also being inserted in very odd locations within the word e.g. im-agery and mu-sic.
PrimeHunter (talk) 16:49, 20 March 2020
MediaWiki:Minerva.css has requested the browser to select places to hyphenate in Mobile since 2018.[5] I don't know whether something recent has caused it to be used more. There was also a post today at Wikipedia:Help desk#Hyphenation and line breaks (iPhone/iPad). You can disable it by adding the below to Special:Mypage/minerva.css.
PrimeHunter (talk) 16:57, 20 March 2020
* {hyphens: manual !important;}
Or is MediaWiki:Minerva.css only used by "MinervaNeue" at Special:Preferences#mw-prefsection-rendering while the mobile site uses MediaWiki:Mobile.css? Special:Mypage/minerva.css is loaded in mobile in either case so the fix works.
Modest Genius (talk) 17:22, 20 March 2020
I am using Safari on iPhone, so that's probably the same issue. I only log in on desktop, remaining logged out on mobile, so cannot make manual CSS changes there. And if there's a problem, it should probably be fixed for more than just the user who reports it...
PrimeHunter (talk) 17:28, 20 March 2020
By "fix" I meant a change for those who want it. I haven't opined on whether it should be changed for everybody.
TheDJ (talk) 10:05, 21 March 2020
@PrimeHunter:, I think Minerva.css didn't use to get loaded on the mobile site (only in the desktop version of the Minerva skin). Judging from responses now, it seems this was rectified last week (I know some skin rework was done; I'm guessing this is part of it). If I remember correctly, there were multiple people asking for hyphenation back then, and I made that change to test it out and asked people to look at the result. I believe we already concluded back then that auto hyphens were terrible, because they have no sense of measure, but apparently I never reverted it. Suggest we undo that change. —
Modest Genius (talk) 10:56, 23 March 2020
Again it's unclear if there was a deliberate change, but it's looking a lot better today. TFA has just two auto-hyphens, and ITN three (portrait mode, two and two on landscape), which is far more manageable. This level is probably what was originally intended.
TheDJ (talk) 11:14, 23 March 2020
@Modest Genius:, the breaking is completely automatic as determined by an algorithm in the browser. —
Modest Genius (talk) 11:18, 23 March 2020
Sure, but that doesn't explain why it went haywire for a few days, on Wikipedia only, affecting multiple users. There was no Safari update in that time either.
TheDJ (talk) 19:22, 23 March 2020
@Modest Genius:, my comment was about today. There is no difference in 'aggressiveness' of the hyphenation between today and yesterday, because there is no such setting as 'aggressiveness' for auto hyphenation. As to what changed before that, that is already being discussed in this thread. —
PrimeHunter (talk) 22:29, 23 March 2020
There is a suggestion to remove automatic hyphens at MediaWiki talk:Common.css#Auto-hyphen.
Mahmudmasri (talk) 20:26, 25 March 2020
The reason we only noticed it lately is that our browsers couldn't make use of that feature until they were updated; then we took notice of the two-year-old edit. The feature might be helpful in certain contexts, like extremely long German compound words, but not at every syllable, and not in English, in which vowels and sometimes consonants depend on the whole word to be correctly pronounced. I wouldn't utter a T in "often", but it was split to of-ten. --
Articles nearly unreadable
Mclarenfan17 (talk) 04:43, 21 March 2020

In the past couple of days, it appears that changes have been made to Wikipedia that have made articles almost unreadable on the mobile site. Words are now allowed to be broken up across lines, connected by a hyphen (I forget the name for this). This is perhaps most noticeable in tables, such as Template:Episode table. One of the column headers in this table is "Directed by", but with the recent changes, it now appears like this:

Dir-
ec-
ted
by

There are countless examples of this, and changing the orientation of the screen does nothing to fix it.

PrimeHunter (talk) 13:14, 21 March 2020
@Mclarenfan17: I moved your post to the existing section about the issue. The hyphenation may soon be removed. You can add the above CSS to remove it now for yourself.
Miscategorization into Category:Templates with short description
Andrybak (talk) 02:09, 26 March 2020
Template:Resolved For some reason, Template:Tlp causes the page Template talk:Dosanddonts to be added to Category:Templates with short description. I figured it out by commenting out parts of the page. However, Template:Parameter names example, Module:Parameter names example, and Template:Dosanddonts (passed as parameter into {{Parameter names example}}) don't seem to reference Category:Templates with short description in any way. Does anyone have an idea of why such miscategorization could happen? —⁠
Jonesey95 (talk) 03:11, 26 March 2020
{{Information page}} has an includeonly block at the top with short description code in it. It may be a good idea to wrap that code in a namespace-limiting selector template. See {{Namespace and pagename-detecting templates}} for options. –
Andrybak (talk) 03:46, 26 March 2020
@Jonesey95:, thank you. —⁠
Template Infobox Pandemic - aka Template:Infobox outbreak - is NOT working
Shearonink (talk) 04:16, 26 March 2020
Ok. The Infobox Pandemic template has something - something very wrong with it. The website linkage isn't working. All it says on every single article I have checked is Official Website. BUT the underlying URL isn't presenting. I don't want to try to tinker with it and break the myriad coronavirus-pandemic articles somehow. Help!
PrimeHunter (talk) 04:23, 26 March 2020
Fixed by [6] at Wikipedia:Help desk#website variable - Infobox pandemic. If you still see problems then purge the page. If that doesn't work then post a link to the article.
Shearonink (talk) 04:26, 26 March 2020
Nope, all is well. Working, YAY.
Template for identifying autoconfirmed users?
Sdkb (talk) 06:44, 24 March 2020
I'm looking to customize {{Afc talk}} so that, rather than saying "If your account is [autoconfirmed] you can create articles yourself", it gives advice specific to the user's situation. Is there a template similar to {{IsIPAddress}} that identifies autoconfirmed users?
SD0001 (talk) 09:44, 24 March 2020
Unfortunately, it is not possible for any template or module to check whether a user is autoconfirmed or not.
Izno (talk) 14:02, 24 March 2020
However, CSS can target groups. An example set is MediaWiki:Group-sysop.css. --
Sdkb (talk) 20:24, 24 March 2020
@Izno: Hmm, is that something I'd be able to use for what I'm trying to do? I'm not that familiar with CSS.
Izno (talk) 21:10, 24 March 2020
@Sdkb: Probably could, depending on the tradeoff you have to make. Basically you'd put the text for the autoconfirmed user and the text for the non-autoconfirmed user, and then wrap each in the appropriate class, without any if statements or similar. The source wikitext has more "junk" in it, but a new user should only see the content they care about on the page. --
Sdkb (talk) 23:33, 24 March 2020
@Izno: A little extra code sounds like a worthwhile tradeoff. (I'm assuming it would display based on the status of the viewer, not the status of the user whose talk page it's on, so it'd display differently for AfC reviewers than reviewees. That's a bigger downside to me, but I'd still like to try implementing it.) I found MediaWiki:Group-autoconfirmed.css, which is hopefully right, but I'm not sure where the group is for all non-autoconfirmed editors; is there a class for that or a way to code it?
Izno (talk) 01:07, 25 March 2020
I think MediaWiki:Group-user.css. I'm not super in tune to how these work but if you fuss with this stuff you can probably get there. --
PrimeHunter (talk) 01:33, 25 March 2020
There is no group for non-autoconfirmed editors but you can use the definition of :unconfirmed-show in MediaWiki:Group-autoconfirmed.css:

<div class="autoconfirmed-show">Only autoconfirmed viewers see this.</div><div class="unconfirmed-show">Only non-autoconfirmed viewers see this.</div>

This code produces:
Only autoconfirmed viewers see this.
Only non-autoconfirmed viewers see this.
unconfirmed-show starts out visible to everybody but is then hidden for autoconfirmed users by MediaWiki:Group-autoconfirmed.css. autoconfirmed-show starts out visible for everybody, is then hidden for everybody by MediaWiki:Common.css, but is then made visible again for autoconfirmed users by an !important override in MediaWiki:Group-autoconfirmed.css. If the CSS files are not loaded for a user then they see both, e.g. in safemode or in some republishers.
Sdkb (talk) 06:46, 26 March 2020
Thank you; that looks like it'll work!
Moxy (talk) 07:22, 26 March 2020
@Sdkb: I use Special:ActiveUsers; it displays a count of recent edits after each username, plus group memberships. We also have Special:Log/rights.--
Links to WP:SPI archives don't appear
RoySmith (talk) 16:19, 26 March 2020

WP:Sockpuppet investigations/Osatmusic was archived yesterday, but there's no link to the archive page. I noticed something like this a while ago, and was told it was just a cache problem. In that case, when I looked again, sure enough the link was there, so I assumed that was the case. But, this is 14 hours ago, which seems like more than long enough for any page cache to time out. Even odder, what links here doesn't show the archive page either. I only get:

User talk:Osatmusic ‎ (links | edit)
User:Osatmusic ‎ (links | edit)
User:Krealkayln ‎ (links | edit)
User talk:Krealkayln ‎ (links | edit)
Category:Wikipedia sockpuppets of Osatmusic ‎ (links | edit)

and if I directly query the pagelinks table, sure enough, that's all that's in the table. Any idea what's going on here? --

Xaosflux (talk) 16:36, 26 March 2020
Template:Worksforme @RoySmith: check now, you may need to clear your cache. —
RoySmith (talk) 17:01, 26 March 2020

@Xaosflux:, Yes, it's working for me now too. So, this is essentially the same experience I had the last time. Somebody else tries it, it works for them, and then it works for me too. That smells like some kind of cache invalidation problem inside the server stack, not my browser.

The other thing that's confusing is why the archive doesn't show up in the What Links Here listing. Looking at some other SPI pages, they're all like that. I guess the "< Wikipedia:Sockpuppet investigations‎ | Karmaisking" line at the top of the archive is generated in some way that doesn't create an entry in the pagelinks table? --

Xaosflux (talk) 17:14, 26 March 2020
@RoySmith: those links are coming from an included template, so it certainly can be server-side caching. You can always WP:PURGE a page or give it a null edit to force a page recache. —
Xaosflux (talk) 17:18, 26 March 2020
Likely what is happening: the content is edited out of the page and saved, but the creation of the new archive page hasn't finished yet (or happened very close to the same time; in your example, the same second), so when the page is saved the template finds nothing and renders blank. The next purge or edit to the page causes the template to recalculate. A way to avoid this is to make sure the new archive page is successfully created before removing the content from the original page. —
Template include size limit
Jc86035 (talk) 22:44, 24 March 2020

2019–20 coronavirus pandemic and 2019–20 coronavirus pandemic in Mainland China are still each exceeding the template include size limit, which is generally not a good thing because it produces reader-facing errors and we aren't supposed to like those. I've tried to reduce the template size of the former article (mainly from removing inline CSS), but I've only gotten about one quarter of the way there.

The main ways I could see the situation improving:

  • Replacement of {{Flagdeco}}, which would result in a reduction of about 300KB. The template is impressively inefficient and has multiple nested layers. I think it could be appropriate to just substitute all the instances, but it would probably be worthwhile to improve the template as well.
  • Replacement of {{Medical cases chart/Row}}, which suffers from the same sort of issue; its improvement would result in a similar reduction. It's based on {{Bar stacked}}, but I think it would be worth removing the dependency if it would improve the efficiency of the template.
  • Reduction of the navbox sizes, specifically {{2019–20 coronavirus pandemic}}. The navboxes at the bottom of the former article take up 500KB. This navbox in particular seems to be excessively large and would probably benefit from being split into several smaller navboxes. Removing the links to the data templates alone would result in a reduction of 130KB.

None of these seem like low-hanging fruit, since both templates are fairly convoluted and the navbox split might take a while and/or be difficult to organize, so maybe there are still other things that could be reduced first.

Ozzie10aaaa (talk) 23:22, 24 March 2020
navbox split isn't a bad idea...IMO--
Graeme Bartlett (talk) 00:27, 25 March 2020
I checked out substing the Flagdeco template; it needs to be substed 3 times, with some useless "if" statements cut, to end up with a simpler file and size spec. It looks to be achievable if we do it in a sandbox and then replace the flags. There has been consensus to keep the flag images. I am prepared to do this over a period of time if there is agreement here.
Jonesey95 (talk) 02:26, 25 March 2020
There was some discussion last month about reducing the size of flag templates at Wikipedia talk:WikiProject Flag Template. –
Johnuniq (talk) 02:27, 25 March 2020
There are 224 uses of flagdeco in the three sidepanel templates. I could readily replace them with much more efficient code that I would put in a module; however, previewing the 224 templates in a sandbox requires only "CPU time usage: 0.824 seconds" and not much else, so I don't think working on them would help much. I'll look at the other options. I have fixed several large articles by replacing key parts with cut-down equivalents in a module, and something like that will be needed here, as the article requires "CPU time usage: 9.328 seconds", which makes edits very hard apart from the technical limit.
Jc86035 (talk) 05:52, 25 March 2020
@Johnuniq: I'm specifically looking at the template include size. I don't think the CPU usage is really relevant (that is, even a fast template could still be inefficient in terms of template size). Directly because of the layers of nesting, Template:Tlx becomes ~810 bytes when used in the articles but would become ~190 bytes if it were substituted all the way down. This is primarily where the improvement would come from.
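The byte arithmetic Jc86035 describes can be illustrated with a toy model (an editorial sketch, not the parser's exact accounting): the post-expand include size counts the expanded output of every transclusion, so output produced through N nested template layers is charged roughly N times, while substitution charges it once.

```python
# Toy model of post-expand include size accounting (an assumption for
# illustration, not MediaWiki's exact algorithm): output produced
# through N nested template layers is charged roughly N times.

def post_expand_cost(final_output_bytes, nesting_layers):
    """Approximate bytes charged against the include limit."""
    return final_output_bytes * nesting_layers

# ~190 bytes of final output through ~4 nested layers is charged ~760
# bytes, in the same ballpark as the ~810 bytes measured above, while
# substituting all the way down charges the final output only once.
nested_cost = post_expand_cost(190, 4)        # 760 bytes
substituted_cost = post_expand_cost(190, 1)   # 190 bytes
```

This is why flattening {{Flagdeco}}'s nested layers shrinks its include-size footprint even though its final rendered output is unchanged.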
Johnuniq (talk) 04:11, 25 March 2020

The current problem is that 2019–20 coronavirus pandemic has hit this limit:

  • Post‐expand include size: 2097139/2097152 bytes

1.2 MB of that comes from the templates in the following table.

Template Bytes Percent
{{COVID-19 testing}} 215,716 17.8
{{2020 coronavirus quarantines outside Hubei}} 120,753 10.0
{{2019–20 coronavirus pandemic data}} 423,354 35.0
{{2019–20 coronavirus pandemic}} 285,450 23.6
{{PHEIC}} 8,588 0.7
{{Health in China}} 17,649 1.5
{{Respiratory pathology}} 64,663 5.3
{{Viral systemic diseases}} 39,892 3.3
{{Pneumonia}} 7,485 0.6
{{Epidemics}} 25,151 2.1
Total 1,208,701 100.0

To fix the post‐expand include size problem, the only options are to remove some of the above from the article, or to include the wikitext of some of the above templates in the article. Those options are ugly and the second option would not save much. Commenting out {{2019–20 coronavirus pandemic data}} and previewing the page gives 2,002,635 bytes with all the other templates expanded.
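The table above can be sanity-checked with a short sketch (byte counts copied from the table; the 2,097,152-byte limit and the 2,097,139-byte reported size are the figures quoted earlier in this thread):

```python
# Byte counts copied from the table above.
template_bytes = {
    "COVID-19 testing": 215_716,
    "2020 coronavirus quarantines outside Hubei": 120_753,
    "2019–20 coronavirus pandemic data": 423_354,
    "2019–20 coronavirus pandemic": 285_450,
    "PHEIC": 8_588,
    "Health in China": 17_649,
    "Respiratory pathology": 64_663,
    "Viral systemic diseases": 39_892,
    "Pneumonia": 7_485,
    "Epidemics": 25_151,
}

LIMIT = 2_097_152     # post-expand include size limit, bytes
REPORTED = 2_097_139  # size reported for the article

total = sum(template_bytes.values())  # matches the table's 1,208,701
share = {name: round(100 * b / total, 1) for name, b in template_bytes.items()}
headroom = LIMIT - REPORTED           # only 13 bytes to spare
```

The headroom figure makes the problem concrete: the article is effectively at the ceiling, so any template growth pushes navboxes past the limit.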

Jc86035 (talk) 06:02, 25 March 2020
@Johnuniq: I would disagree with your assessment that these are the "only options", and I'm not sure how you would get to that conclusion. There is a decent amount of room to make many of those templates more efficient in terms of template size (I had already made two of them more efficient before starting this thread), and I think the reduction would be (barely) achievable without omitting any actual content. ({{Flagdeco}} is used in three of those templates.)
Jc86035 (talk) 06:08, 25 March 2020
You could even cut the navboxes' template size by close to a quarter just by invoking the modules directly from the navbox templates instead of using {{Navbox}}.
Johnuniq (talk) 06:16, 25 March 2020
@Jc86035: I replaced all 178 occurrences of {{flagdeco|...}} in {{2019–20 coronavirus pandemic data}} with the fully expanded output of one of the flags, then previewed the result at 2019–20 coronavirus pandemic. There was no significant change to the post‐expand include size in the result although it did manage to expand the first two navboxes. By "only options", I was thinking that things like halving the template expansion of the navboxes by directly calling the navbox module would not give enough benefit.
Jc86035 (talk) 06:25, 25 March 2020
@Johnuniq: If you previewed the result in the article, because it would be exceeding the limit both before and after the change, it would just display some number close to the maximum each time (if I'm understanding the situation correctly). Testing by previewing the template without the documentation and then doubling the number is probably more accurate.
Johnuniq (talk) 06:34, 25 March 2020
Yes, the expansion size was near the maximum but, as mentioned, it managed to expand the first two of the navboxes ({{2019–20 coronavirus pandemic}} and {{PHEIC}}). Replacing every Template:Tlf with its expansion would save the space from those two navboxes, namely 285,450 + 8,588 = 294,038 bytes. However, a bit more saving is needed to include all the navboxes, and the article is going to keep expanding.
Moshirk (talk) 13:15, 25 March 2020
headache.
Johnuniq (talk) 04:30, 25 March 2020
Jc86035 (talk) 05:54, 25 March 2020
@Johnuniq: It's probably because of the size of the documentation, which includes four real examples.
Brandmeister (talk) 12:41, 25 March 2020
Interestingly enough, a sandbox version shows the bottom templates normally. So the issue seems to occur only in article namespace.
Jc86035 (talk) 15:59, 25 March 2020
@Brandmeister: I replaced the {{flagdeco}} uses in two of the templates a few hours ago, which appears to have resulted in the issue being resolved (for now) on 2019–20 coronavirus pandemic. There may have been other contributing changes, such as the change to {{flagdeco}} from {{flagicon}} in {{COVID-19 testing}}, the change to use the navbox modules directly in {{2019–20 coronavirus pandemic}}, and changes to article content. Nevertheless, the page is still just 6% under the template limit, and 2019–20 coronavirus pandemic in mainland China still exceeds the limit, so I think it would be worth it to make further optimizations.
Brandmeister (talk) 16:31, 25 March 2020
Thanks, looks normal so far, at four visible bottom templates.
Ahecht (talk) 18:40, 26 March 2020
I worked on 2019–20 coronavirus pandemic in mainland China a bit, hiding the map (which was 500,000 bytes by itself) and {{2019–20 coronavirus pandemic}} (which wasn't displaying anyway), and calling the Graph extension directly instead of using {{Graph:Chart}}. However, the page is still at 90% of the maximum post-expand include size, mainly due to the sheer number of {{Cite web}} templates on the page. --
Category view in watchlist
Beetstra (talk) 06:15, 26 March 2020

I have the option to show additions/removals to categories turned on (or better, the inverse not turned off). That gives now a line in my watchlist:

  • [revid] [user] ([talk] [contribs] [block]) [article] added to category

What I sorely miss there is a diff link for the edit that caused the category change. I now either have to click 'contribs' and find the edit of the editor, or click 'article', load its history, and find the edit there. If one editor made multiple edits to the same article, the actual edit that caused the category change is quite a job to find. For maintenance categories, you sometimes want to be able to revert the edit that caused the categorisation change, without having to hunt for it manually. Are there any options to add a 'diff' link to these lines in the watchlist? --

PrimeHunter (talk) 15:20, 26 March 2020
Click the revid to see the revision. Then click "diff" to the left of "Previous revision" at the top to see the diff for the revision (there is no diff link if it's the page creation). This works for all revision links. There is also a diff link to the next and current revision. You appear to have enabled "Group changes by page in recent changes and watchlist" at Special:Preferences#mw-prefsection-rc. It's worse if it's disabled. Then you get unlinked "(diff | hist)" (this is phab:T148533) and have to do the contributions or history search you mention.
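The click path PrimeHunter describes can be shortcut by building the diff URL from the revision ID directly; the `diff=prev|next` and `oldid` index.php parameters are standard MediaWiki link forms (the enwiki base URL below is just an example).

```python
# Sketch: jump straight from a revision ID (the "revid" link in the
# watchlist line) to its diff. diff=prev / diff=next with oldid are
# standard MediaWiki index.php parameters.
BASE = "https://en.wikipedia.org/w/index.php"

def diff_url(revid, direction="prev"):
    """URL for the diff between this revision and the previous/next one."""
    return f"{BASE}?diff={direction}&oldid={revid}"

url = diff_url(944834000)
# e.g. https://en.wikipedia.org/w/index.php?diff=prev&oldid=944834000
```

A user script could rewrite the watchlist's revid links into this form, which is presumably the "coding" Beetstra has in mind below.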
Beetstra (talk) 19:59, 26 March 2020
@PrimeHunter:, thanks. I feel some coding coming up ...
One suggestion for the UI
Lightbluerain (talk) 18:01, 25 March 2020
Hello, I have a suggestion for Wikipedia's UI. When we have a long article needing a task done, it is frustrating to read it up to the end and then scroll back up to click the edit or publish button. I wanted to suggest: can't we have a scroll-up button at the bottom right corner of every Wikipedia page? I think this would help a lot.
BrandonXLF (talk) 20:04, 25 March 2020
@Lightbluerain:, I would suggest you take a look at User:BrandonXLF/ToTopButton. The page has instructions to help you install the user script.
Jonesey95 (talk) 21:45, 25 March 2020
Your browser or operating system probably also has a keyboard shortcut for "go back to the top of this window". On my computer, for example, it is "command-up arrow". –
Elizium23 (talk) 06:25, 26 March 2020
Yeah, "Home" or "Ctrl-Home" should work on non-Macs. (except for mobile devices. If anyone knows how to go to "top of page" on Android Lollipop, I'm dying to know it.)
Lightbluerain (talk) 08:38, 26 March 2020
User:BrandonXLF Thanks a lot. It worked.
Jonesey95, Thanks, but I use wikipedia on mobile.
Elizium23, read User:BrandonXLF's comment. It worked for me.
TheDJ (talk) 08:43, 26 March 2020
Elizium23 "I use wikipedia on mobile". If you use iOS, you can double-tap the top of the screen, and it will jump to the top in any app or website. It's a little-known trick, but once you know it, you won't stop using it. —
Mandarax (talk) 21:03, 26 March 2020
For another option, see User:Numbermaniac/goToTop (and its companion User:Numbermaniac/goToBottom.js).
How do I find this?
Sdkb (talk) 06:45, 26 March 2020
I'd like to propose a change to the automated notification new editors receive after creating their account and the one they receive after making their first edit. Is there a page buried deep in MediaWiki or something where I could propose such a change?
Malyacko (talk) 10:31, 26 March 2020
@Sdkb: Depends. Do you talk about email notifications or about in-browser notifications? Do you want to change the default in the MediaWiki software itself, or only for English Wikipedia? To change the in-browser first-edit one, see the "notification-header-thank-you" strings in the English translation of the Echo/Notifications extension at phab:diffusion/ECHO/browse/master/i18n/en.json (and see mw:How to report a bug if you want to propose changes). To change the account creation email on English Wikipedia only, see MediaWiki:Confirmemail_body. (PS: For future reference and as a courtesy to readers, please avoid "this" in summaries but name things, as nobody can know what "this" is about until having read the complete comment - thanks a lot!) --
PrimeHunter (talk) 14:33, 26 March 2020
We haven't customized any of the messages after the nth edit.[7]
Sdkb (talk) 16:39, 26 March 2020
@PrimeHunter: the proposal is to have the on-wiki welcome notification link to Help:Introduction to Wikipedia instead of WP:Getting started (the intro page that's just a list of every other intro page). I can't find any code linking to WP:GS on the MediaWiki page, but @Clovermoss: looked through her past notifications and found that's where it links currently.
PrimeHunter (talk) 16:55, 26 March 2020
@Sdkb: I once added the link Allmessages from the API to WP:QQX. It can be used to search all MediaWiki messages when you know a string. This is MediaWiki:Notification-welcome-link. The available pages for a link depends on the wiki so a change should be here and not in MediaWiki itself. The default is empty.
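The Allmessages search PrimeHunter mentions can be driven from the MediaWiki API; the sketch below builds such a query URL, assuming the standard `meta=allmessages` module with its `amfilter` name-filter parameter (the enwiki endpoint is an example).

```python
# Sketch: build an API query that searches MediaWiki interface
# messages whose names contain a given string, via meta=allmessages.
# amfilter is the allmessages name-filter parameter.
from urllib.parse import urlencode

def allmessages_url(substring, base="https://en.wikipedia.org/w/api.php"):
    params = {
        "action": "query",
        "meta": "allmessages",
        "amfilter": substring,  # match message names containing this
        "format": "json",
    }
    return f"{base}?{urlencode(params)}"

url = allmessages_url("welcome")
```

Fetching that URL (e.g. with amfilter=welcome) would list messages such as MediaWiki:Notification-welcome-link among the results.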
Sdkb (talk) 17:48, 26 March 2020
Thanks so much; you all are wizards! I'll make the proposal sometime soon at VPP for visibility, but glad to have somewhere concrete to point to.
Moxy (talk) 20:42, 26 March 2020
Will not support a change to an accessibility nightmare set of pages. Not sure you get it yet! --
PrimeHunter (talk) 21:22, 26 March 2020
I don't think any of the earlier posters have participated in past discussions. See Wikipedia:Village pump (proposals)/Archive 130#New accounts - initial notification for links.
Sdkb (talk) 22:27, 26 March 2020
Thanks, I'll take a look!

@Moxy: I had a feeling you'd oppose this haha. You've made your views very clear, and please trust that I am reading all your comments/the resources you're linking; I sincerely appreciate the attention you're giving to the proposals, and I think it's made them stronger. On some matters like the buttons we are just going to have to disagree; I've already responded and don't have anything else to add beyond them. Regarding the new issue of accessibility you've started bringing up, my thought is that, by adding some fragment tags to Template:Intro to and then using excerpts, it should be possible to create a readable single-page display of the intro to series that could be linked from the menu. Overall, I'm not sure the technical pump is the best place for us to get into this discussion too deeply; we'll both have a chance to have our say if I get around to making it a proposal.

Moxy (talk) 23:08, 26 March 2020
I am really of the opinion you need much more experience before proposing wholesale changes all over. It's wonderful you found some help pages to fix recently, but I need you to read what people are saying and posting about accessibility and reader retention. Take some time to learn some coding, what new editors do and need, and how people navigate pages. What you're proposing is the fourth multiple-page tutorial and the biggest one to date. We know how badly this works for our readers because we have already done this three times. The only change I would support would be a link to Help:Content, as it's the page with the most community input and discussion over the years. As has been said a few times now... slow down, learn and test... don't just push what you think looks pretty... learn what works. Pls review Wikipedia usable for the blind? --
Dated statements
Brandmeister (talk) 16:47, 26 March 2020
Jonesey95 (talk) 17:11, 26 March 2020
This was caused by misuse of the {{As of}} template in Template:2019–20 coronavirus pandemic data. I have fixed those instances. It seems like that template could use some error checking. –
Brandmeister (talk) 20:14, 26 March 2020
Category:Articles containing potentially dated statements from 26 March 2020 has now reappeared in the article, perhaps someone is messing up the template.
Jonesey95 (talk) 23:37, 26 March 2020
It looks like AnomieBOT fixes this problem, as long as there is an editing gap long enough for the bot to make an edit. –
punctuation and piping
Ohconfucius (talk) 09:02, 26 March 2020

A fellow editor has remarked that the markup

Template:Brackets,

often ejects the comma to a new line on a mobile display, and thus has attempted to circumvent this sort of behaviour by using the piped markup

Template:Brackets

As I see it, this manner of piping ought not to be necessary and is probably in breach of WP:PIPE. Incidentally, the colleague contacted me because I have a script that removes redundant piping and which has disturbed their workaround. I have not personally observed the line ejects as described, but if such ejects are "normal behaviour" for the MW software, this may be an issue for the wider community. Any comments would be welcome --

TheDJ (talk) 12:10, 26 March 2020
I hardly ever encounter this behaviour on iOS. But yes, browsers sometimes do this. They break at word boundaries when needed, and because "," cannot be part of a word (especially when preceded by a link) it is a valid 'breakpoint', especially when CSS word-break:break-all is used to prevent long words from overflowing outside the viewport (used on mobile). When width is constrained, you might see it more often, simply because more breaks are required. Making workarounds is not the right strategy, however; first we should check the exact spot and determine which browser does this and how often it happens. —
Izno (talk) 13:19, 26 March 2020
Broadly agree with TheDJ, but I would see this as generally inappropriate. --
PrimeHunter (talk) 14:21, 26 March 2020
Me too. Template:Tlx is another workaround which may be better but I don't like cluttering up the wikitext for it. It prevents wrapping inside the whole link. Whether this is desirable may depend on the length and other factors.
Edwin of Northumbria (talk) 15:58, 26 March 2020

Many thanks for the comments, especially those of PrimeHunter re. another possible workaround. To TheDJ, I note that when using Chrome on Android it's very common to see hanging punctuation marks as I described, therefore I would certainly say this is an issue which needs addressing. To Ohconfucius, I don't see anything on WP:PIPE to suggest that the two workarounds now suggested are proscribed. What the guidelines do mention are cases where an entire word or phrase is piped to create an internal link and that link pertains only to part of the text in question. The rationale of prohibition here is clearer, since in theory a reader might be given the impression that they're being redirected to an article on one subject (possibly as yet unwritten) when in fact it concerns quite another. However, I've never encountered this situation in practice and would argue that editors should be able to exercise their own judgement in such instances (with the caveat, perhaps, that WP:PIPE advises the exercise of caution). Punctuation inside piped internal links is a different matter entirely, since it doesn't introduce this kind of ambiguity.

Since, when I see poor typesetting on any web page, I immediately begin to question the reliability of its content, my personal opinion is that it's actually rather important to avoid hanging punctuation, aside from the fact it just looks nasty!! I can't immediately think of a way of avoiding this unless the same style is applied to the word immediately preceding a punctuation mark (as far as I'm aware the "hanging-punctuation" property has, to date, only been implemented by Safari). A workaround would therefore seem the only course of action, though it may not be ideal – in life, the "least worst" solution to most problems involves some degree of compromise.

Any further thoughts on this subject would be most welcome.


Ohconfucius (talk) 16:38, 26 March 2020
To those who have observed the line break between a word and its trailing comma, I'd ask whether it occurs in the absence of a link (Edwin implies this although I don't see why it should happen Template:Wink), and whether it happens on other websites of reliable sources. --
TheDJ (talk) 16:53, 26 March 2020
@Edwin of Northumbria: please file a bug with Chrome mobile. Doing weird workarounds in our content isn't a sustainable long-term solution, honestly, especially when it involves problems that are device-specific. "Punctuation inside piped internal links is a different matter entirely, since it doesn't introduce this kind of ambiguity" No, but it does other things, like telling Google search, Apple dictionary, or screen-scraping bots that the display value of a link contains a comma. Also consider that your personal 'least worst' might still be someone else's 'most worst'. If we don't complain to browser vendors then the problem will never be fixed, and there will be a never-ending workload of adding commas to links and then desktop users removing them again. —
PrimeHunter (talk) 17:51, 26 March 2020
For the record, [[Primal Scream]], produces this html in both desktop and mobile: <a href="/wiki/Primal_Scream" title="Primal Scream">Primal Scream</a>, . Wrapping before the comma looks like a poor browser decision.
Edwin of Northumbria (talk) 02:56, 27 March 2020

 Ohc ¡digame! and TheDJ – Further research indicates that the problem occurs only when a link is involved and is not limited to Chrome! The same issue arises with Firefox, Opera and MS Edge (I didn't check other browsers). I'm not sure if they all use the same display engine, but it would be logical if they did, and my tests appear to confirm this. None of the aforementioned software exhibits the same behaviour in Windows 10, so it seems more appropriate to file an Android bug report. If TheDJ concurs with my analysis then I'll do so ASAP.

Thanks to TheDJ for your observations re. bots, but are they really that stupid? (I'm not disagreeing with you, I'm just surprised.) Could you point me to some documentation on this, especially with respect to the use of "display names"?


Infobox Images with transparent areas needing a different background color
T3g5JZ50GLq (talk) 02:57, 26 March 2020

For the article Made with Code, the image File:MwC_Logo-pink.png should have a black background.

Is this possible?
http://web.archive.org/web/20180308230527/http://www.madewithcode.com/
Template:Infobox organization
Module:Infobox
Module:InfoboxImage says:
{{#invoke:InfoboxImage | InfoboxImage | image={{{image}}} | size={{{size}}} | maxsize={{{maxsize}}} | sizedefault={{{sizedefault}}} | upright={{{upright}}} | alt={{{alt}}} | title={{{title}}} | thumbtime={{{thumbtime}}} | link={{{link}}} | border=yes | center=yes | page={{{page}}} }}
Andrybak (talk) 14:52, 27 March 2020
@T3g5JZ50GLq:, you could also replace it with a logo variant, which expects a light background. In your link to the Wayback Machine, the logo in the top left corner might be suitable. —⁠
Duplicate IP address to mine? UPDATE: Answer, No. New question: Why is Wiki alerting me when edits from IP addresses similar to mine are posted, before I'm logged in?
Swliv (talk) 23:04, 21 March 2020
There seems to be a duplicate IP address to mine; or mine's being used somehow also by another IP-number editor (vandal-type edits). I don't know how to confirm my own IP address from my machine, but I get occasional messages when I come onto Wiki without being logged in. (My IP number as triggered/used by this imposter (?) is 12 digits -- four groups, three digits each -- which per IP address doesn't necessarily look legit.) (Somehow there's only one recorded edit to this 12-digit IP number currently, and the edit is recent; but I've certainly encountered (a) similar situation(s) one or more time(s) before, too.) Does this outline a familiar problem at all, with a solution? Thanks!
SQL (talk) 23:27, 21 March 2020
@Swliv:, Four groups, 3 digits each sounds exactly right for an IP address. Each group can be 1 to 3 digits, from 0 to 255. There are a lot of reasons that this could happen. Mobile internet for one, or perhaps a shared internet such as at a university, business, library, or other similar places. Could also be someone else at the same house.
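SQL's description of the dotted-quad format (four groups, each 0 to 255) can be checked with Python's stdlib `ipaddress` module, which enforces exactly those rules; the addresses below are illustrative examples, not anyone's real IP.

```python
# Validate a dotted-quad IPv4 address: four groups of 1-3 digits,
# each between 0 and 255. The stdlib ipaddress module enforces this.
import ipaddress

def is_valid_ipv4(text):
    try:
        ipaddress.IPv4Address(text)
        return True
    except ipaddress.AddressValueError:
        return False

is_valid_ipv4("203.0.113.42")   # four in-range groups: valid
is_valid_ipv4("203.0.113.256")  # 256 exceeds 255: invalid
```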
HiLo48 (talk) 23:30, 21 March 2020
IP addresses are not fixed. Solution? Register.
SQL (talk) 23:33, 21 March 2020
@HiLo48:, The original poster is registered. Their concern is that they are seeing their ip being misused recently.
SD0001 (talk) 06:20, 22 March 2020
@Swliv: you can get to know your (public) IP address simply by googling my ip
Swliv (talk) 20:02, 27 March 2020
UPDATE: @SD0001:, @SQL: and @HiLo48:, you gave me enough to motivate more work, and I can now dismiss my first hypothesis, the 'duplicate' one. To start again from scratch: I've now identified two earlier cases, in 2017 and 2019 respectively, similar (but with different IP addresses) to the recent one that prompted the above. From all this and the Google search tip: no one that I can identify has a duplicate IP address to mine. The three 'false alerts', as I'll now call them, each matched my IP address in the first two trios of figures, but the second two trios or doublets of numbers had NO matches (in sequential order or not) with my address. (I have only a public address as far as I can tell. I am not part of any private 'shielding' net that I know of.) So somehow it now seems Wikipedia is directing false alerts to me based on a 'resemblance'. It has occurred to me that another current device and other now-defunct devices of mine are or have been associated with my same user name; but the false alerts all appeared specifically when I was not logged in. The new question seems now to be why. If anyone has insight into the new puzzle, I'm again open to trying to solve it. Thanks much.
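The pattern Swliv reports (the first two groups match, the rest don't) is what you'd see for addresses in the same /16 network, which is common when an ISP rotates dynamic addresses within one block. A small sketch of that check, using illustrative example addresses:

```python
# Sketch: do two IPv4 addresses share their first two groups, i.e.
# fall in the same /16 network? Addresses here are illustrative
# examples, not anyone's real IP.
import ipaddress

def same_slash16(a, b):
    net = ipaddress.ip_network(f"{a}/16", strict=False)
    return ipaddress.ip_address(b) in net

same_slash16("203.0.113.5", "203.0.200.9")  # first two groups match
same_slash16("203.0.113.5", "203.5.113.5")  # second group differs
```

This is only a plausible explanation for the 'resemblance', not a claim about how MediaWiki targets its IP talk-page alerts.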
Getting text to display on mobile only
Sdkb (talk) 05:08, 27 March 2020
Apologies that I've been asking a bunch of technical questions here recently, but one more: what do I wrap a span of text with so that the text displays only to mobile users? (I'm looking to link editors who visit tutorials on mobile to Wikipedia:Editing on mobile devices.)
Evad37 (talk) 12:43, 27 March 2020
@Sdkb: You can use Template:Tlx -
Sdkb (talk) 00:04, 28 March 2020
Thanks! I'll add a see also link to that template at Help:Mobile access so that the next editor doesn't need to come here. Feel free to revert if there are bean-spilling concerns.
Why does this link show up?
Rehman (talk) 09:22, 26 March 2020
Hello. If you open Template:Infobox power station in edit mode (not the visual editor), you can see "Geothermal well" being listed under "Pages transcluded onto the current version of this page" (under the edit box). Any idea what part of the code is causing that? I can't seem to figure it out.
Johnuniq (talk) 09:38, 26 March 2020
All I can work out at the moment is that the transclusion is on the doc page: Template:Infobox power station/doc.
PrimeHunter (talk) 13:52, 26 March 2020

The doc page calls the template with qid = Q693330 and this triggers it in data22. Here is a minimal example where it stops if any part is removed: {{#invoke:WikidataIB |getValue |P527 |qid=Q693330 |fetchwikidata=ALL|onlysourced=no}} . The code produces: Script error: No such module "WikidataIB".

This page now transcludes Geothermal well. wikidata:Q693330#P527 says "geothermal well". Module:WikidataIB by RexxS transcludes this name. I don't know why.

RexxS (talk) 16:49, 26 March 2020

When the function in WikidataIB retrieves the value of the property Template:Q from Template:Q, it finds "Q61846297". That has to be transformed into a readable English term, linked to an article, where possible. So the function looks at Template:Q for a sitelink. If one is found, it uses it: no problem. But what if there is no sitelink (as in the case of Q61846297)? Then it looks for a label (and finds "geothermal well" in this case). So can we link to an article using that label? There are three remaining possibilities: (1) the article of that name is a dab page, so we don't link; (2) the article of that name is a redirect, so we do link; (3) there is no article of that name, so we don't link. The only way to determine which of the three possibilities is correct is to examine the title object for that name, and unfortunately, that marks the page that the title object refers to as transcluded onto the page that calls it, whether the page that the title object refers to exists or not. So we either accept the erroneous transclusion, or we lose the functionality to link redirects. That would mean that we would lose two of the three links from calls like the one which returns Template:Q from Template:Q:

Only Anthropologist exists as an article on enwiki, so has a sitelink from Template:Q. The other two are redirects with no sitelinks from Wikidata, and that situation is quite common here. --

PrimeHunter (talk) 17:32, 26 March 2020
@RexxS: Thanks for the detailed explanation. Red links under the list of transcluded pages are often a sign that something is wrong and should be examined. The module could start with an #ifexist check on the target page and stop if the page doesn't exist. I don't know Lua, but in templates, #ifexist does not affect the source page and only causes a WhatLinksHere entry (as a link and not a transclusion) for the target page. That is less likely to cause concern.
RexxS (talk) 18:21, 26 March 2020
@PrimeHunter: The code (lines 630-639) already calls artitle = mw.title.new( label_text, 0 ) which returns a title object or nil if it doesn't exist in mainspace, so we know whether the article exists without any of the false links caused by #ifexist. However, you still have to check if artitle.redirectTarget exists, which is the cheapest way of distinguishing a dab link from a redirect at that point. It's examining that property that makes the false transclusion. I think you'll find that it's far better to have a false transclusion than a false file link, as bots look for the latter and report them. I've already been down that route, and believe me, the current algorithm is the result of lots of debate and amendment. Cheers --
PrimeHunter (talk) 18:59, 26 March 2020
@RexxS: I meant to only stop if no page exists. If it exists then it's OK to examine what it is. You say mw.title.new( label_text, 0 ) returns nil if it doesn't exist in mainspace. If that's how mw.title.new works then I don't see why you have to use artitle.redirectTarget when the page does not exist. But mw:Extension:Scribunto/Lua reference manual#mw.title.new says it only returns nil "If the text is not a valid title". I think this means it e.g. contains disallowed characters, not that there is no page by that name. As I said, I don't know Lua, but maybe it does have to either "link" or "transclude" the page to examine whether it exists. If we prefer to "transclude" then the documentation could mention it. I searched Module:WikidataIB for "transclu" while examining the original post.
RexxS (talk) 19:45, 26 March 2020

@PrimeHunter: I just checked and you're right about the meaning of "valid title". The code it affects is line 633 if artitle and artitle.redirectTarget .... I can change that to if artitle.exists and artitle.redirectTarget ... which will not evaluate the second part if the first is false, i.e. we don't get the false transclusion, but we do get the false file link by testing artitle.exists, so I don't think we're any better off, sorry. If you'd like me to make the change, it's just a moment's work, but I think we'll simply get the same complaints about file links instead of transclusions every time we do this:

What do you recommend? --

PrimeHunter (talk) 02:18, 27 March 2020
@RexxS: I only know Wikipedia:Most-wanted articles which is updated around once yearly, usually by Bamyers99. Are there other reports of red links to mainspace?
RexxS (talk) 00:50, 28 March 2020
@PrimeHunter: I've had a look around and I can't see any other relevant reports. I was relying on my memory for a couple of years ago when an earlier version of the code used the id and isRedirect object properties, and I had complaints about entries appearing in the "What links here" section of articles as file links. Template:Pb I've now found the discussion at Module talk:WikidataIB/Archive 3 #New parameter for getValue sought to avoid attempt to resolve redirects. This was @ferret:'s last observation: Template:Tq. My reply now seems quite prescient: Template:Tq Complaints to Anomie, then. --
Instant Search Suggestion disappeared
Ergo Sum (talk) 04:17, 27 March 2020
Some time ago, my search box on the Wikipedia homepage stopped working (www.wikipedia.org), i.e. no drop-down suggested searches appear when I type in the search box. Perhaps this was the result of a change in the default account settings; perhaps it is an issue that can be resolved by wiping my history and cache? I do not know. If someone who is more technologically proficient than I can assist, it would be much appreciated.
DannyS712 (talk) 05:27, 28 March 2020
@Ergo Sum: Please try purging your cache and trying again. If the issue persists, can you provide what web browser you use, and any errors that appear in the javascript console?
Ergo Sum (talk) 07:05, 28 March 2020
@DannyS712: I've just cleared my cache and no luck. I'm using Chrome. I see 6 errors in the console: cookie was set without "SameSite" attribute (x2), deprecated ResourceLoader (x3).
DannyS712 (talk) 13:18, 28 March 2020
@Ergo Sum: I have the SameSite x2 but no deprecated ResourceLoader; filing a phab task now. Can you paste the content of the output there?
Ergo Sum (talk) 16:19, 28 March 2020
@DannyS712: Sure, can you link me there?
DannyS712 (talk) 16:33, 28 March 2020
Rdcheck tool not working
Steel1943 (talk) 20:32, 25 March 2020
Seems that the Rdcheck tool is not working at the present time. Whenever a page is checked for incoming redirects, the search fails due to Python coding errors.
Steel1943 (talk) 16:15, 26 March 2020
Does anyone know what is going on with this and/or if/when/how this is getting fixed?
Steel1943 (talk) 18:32, 27 March 2020
...?
EdJohnston (talk) 17:12, 28 March 2020
This use of Rdcheck to find incoming links to the Combinatorics article seems to work.
Steel1943 (talk) 17:32, 28 March 2020
Awesome, it's working again.
Something is wrong with {{sfn}} and {{sfnp}}
Mikeblas (talk) 18:15, 28 March 2020
Looks like Template:sfn has become badly broken. It is making red links and duplicate reference errors in tens of thousands of articles. Templates which use it in turn (like {{Listed building England}}) are also having problems. Is a fix coming, or is this some disruptive change that editors must manually repair in each affected article? --
Schazjmd (talk) 18:18, 28 March 2020
Schazjmd (talk) 18:25, 28 March 2020
Sorry, should have pinged @Mikeblas:.
Mikeblas (talk) 18:31, 28 March 2020
That discussion seems to be about problems where the sfn template isn't given a correct anchor for the reference. As far as I can tell, the Listed building England template correctly specifies a ref parameter. In the Harty article, for example, each of the cited buildings has the expected "CITEREFHistoric_England463505" (for example) anchor. The page still has six different "sfnp error" messages, and a "duplicate ref def" message, to boot. Maybe I'm missing some connection, but this issue seems far broader than the change discussed at the Administrators' Noticeboard. --
Schazjmd (talk) 18:46, 28 March 2020
@Mikeblas:, it's all connected. The discussion started at User_talk:Trappist_the_monk#sfn broken?. I don't understand templates well enough to understand what happened; I just know I was horrified last night when I saw the state of one of my articles. I got it cleaned up, but that's why the discussion caught my attention when I saw it.
Izno (talk) 18:42, 28 March 2020
I would recommend reviewing template talk:Sfn in the future. It has the answers you're seeking. --
Unfilled template parameters
Brandmeister (talk) 09:50, 29 March 2020
In Template:Infobox heteropolypeptide some parameters look mandatory and, when unfilled, are shown as unhelpful {{{protein_type}}}, {{{subunit1}}}, etc. (e.g. when the data itself is deficient), as seen in Hemolithin. Is it possible to make them optional?
PrimeHunter (talk) 12:08, 29 March 2020
The creator hasn't edited since 2016. I don't know the subject but I made everything optional except subunit(n), gene(n), locus(n) when SubunitCount > n-1.[8] Is that OK?
Brandmeister (talk) 14:51, 29 March 2020
Thanks, it seemingly works now.
Want to start scripting
RealFakeKim (talk) 16:50, 27 March 2020
(ColinFine from the Teahouse suggested I post this here) Hi. I would like to start writing user scripts for Wikipedia. I've read through the guide but I'm more of a visual learner, so I was wondering if you could point me in the direction of any videos etc. or any experienced user willing to walk me through it. Thanks,
Xaosflux (talk) 13:28, 30 March 2020
@RealFakeKim: you should start by looking over Wikipedia:WikiProject JavaScript and Wikipedia:User scripts. Our programming primarily uses JavaScript and Cascading Style Sheets (CSS). If you are not familiar with these concepts and languages, W3Schools (external site) is a good place to start learning for free. —
QEDK (talk) 13:38, 30 March 2020
Adding a bit that W3Schools is a good place to start, but it's also one of the worst sites to learn from overall. For basics, learn from Codecademy (it is only for basics imo) and then gradually use resources like freecodecamp.org (which is great) and Mozilla Developer Network (which is verbose). --
Let's update all our COVID-19 data by bot instead of manually
Kaldari (talk) 23:13, 26 March 2020
Johns Hopkins University has created a public data set of all COVID-19 infections/deaths/recovered/active for all counties, states, and countries, updated and archived daily and sourced to reliable sources. While our editors have been doing an admirable job of updating all our numbers manually, the effort has not been 100% reliable or consistent, plus there's a pretty good chance you'll get an edit conflict when you try to edit the data because so many people are messing with it. I would like to propose that someone write a bot to automatically pull the data from the Johns Hopkins data source and use it to update tables such as Template:2019–20 coronavirus pandemic data and Template:2019–20 coronavirus pandemic data/United States medical cases by state. It could even be used to update the county lists in state outbreak articles, but that's probably lower priority. Any thoughts about this? Any volunteers?
Mike Peel (talk) 23:36, 26 March 2020
I can write python/pywikibot code that will do the updates, but it's important that it fetches the information from a source that the community trusts, and that doing bot updates is what the community wants. I can set this up tomorrow if that would help. Thanks.
SquidHomme (talk) 23:41, 26 March 2020
I support that, but as you know, sometimes the JHU numbers lag behind the reporting from individual countries/states/sovereignties, as in the case of Template:2019–20 coronavirus pandemic data, so I propose that the bot not cite JHU solely, but instead cite a plurality of sources from different countries, like this local one from Italy for example.—
Izno (talk) 23:42, 26 March 2020
The license statement at the bottom of that Github readme is not compatible with CC by SA (the license is non-commercial). --
Kaldari (talk) 23:47, 26 March 2020
I keep forgetting that some countries consider data to be copyrightable 🤮! I'll try to get in touch with the folks that produce the data to see if they will change the license.
GreenC (talk) 23:53, 26 March 2020
Yes, unfortunately it says "copyright 2020 Johns Hopkins University, all rights reserved". I think in cases like this we allow for periodic usage within citations per fair use, but not systematically copying large parts of the database into Wikipedia, i.e. we are not competing with JHU by hosting their data here. It says at the top they acquired some of the data from public sources, so if we can determine those sources and use that data, JHU cannot copyright PD data such as that from the government --
Doc James (talk) 23:55, 26 March 2020
Different sources come to different approximations of the underlying actual number. Our World in Data compares three major sources over time: Johns Hopkins, ECDC, and WHO.[12]
Wugapodes (talk) 00:00, 27 March 2020
@GreenC: Raw data is not eligible for copyright in the United States and is therefore in the public domain. Regardless of their assertion, they do not have copyright and US law is clear that "facts that were discovered" are not sufficiently creative to qualify for copyright protection (see Feist v. Rural) no matter how much work went into compiling them.
Wugapodes (talk) 00:10, 27 March 2020
But do note that GreenC is correct in saying that "systematically copying large parts of the database into Wikipedia" is not allowed. Copyright extends to the specific presentation format of the data, and so simply copying and pasting it is a violation of copyright law. So long as the data are rearranged or modified, there is no copyright infringement from simply using the underlying public domain data. Doing this to create a competing product is allowed and not infringing.
Wugapodes (talk) 00:18, 27 March 2020
@Mdennis (WMF): would WMF Legal be able to provide guidance on this?
Kaldari (talk) 00:24, 27 March 2020
Since both Johns Hopkins University and Wikipedia are based in the U.S., do we really need to worry about European database rights in this case?
JSutherland (WMF) (talk) 16:34, 27 March 2020
@Wugapodes: Maggie's not in the office at the moment but I can chase this up.
Wugapodes (talk) 21:06, 27 March 2020
Thanks @JSutherland (WMF):, anyone from WMF legal would be helpful, I just chose Maggie as the first person from the list since Template:Noping's userpage says it's for taking actions not getting in touch with the team.
Kaldari (talk) 21:55, 27 March 2020
WMF Legal took a look at this and recommend seeing if Johns Hopkins will freely license their information. According to WMF Legal, while data can't be copyrighted in the U.S., choices in arrangement, selection, and presentation of the data can be copyrighted, which may apply here given what appears to be fairly significant human curation in putting the dataset together. I'll follow-up with my original plan of reaching out to Johns Hopkins directly and see if they would be willing to share the data under a free license.
Wugapodes (talk) 23:55, 26 March 2020
Template:Noping has been approved for trial to do something similar to this already. See Wikipedia:Bots/Requests for approval/WugBot 4. If there is consensus here, I can update the request and modify the bot to update more pages. Or, alternatively, we could probably write a lua module which parses the already available CSV data into a wikitable (I would even bet someone has already written a module like that). I don't have a strong opinion on the proposal, but am willing to help if there's consensus for it. Template:Ec
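The CSV-to-wikitable idea mentioned above can be sketched as follows; this illustration is in JavaScript rather than Lua, handles only simple comma-separated data with no quoted fields, and the function name is hypothetical:

```javascript
// Sketch of parsing CSV data into wikitable markup. Assumes the first CSV
// row is the header; does not handle quoted fields or embedded commas.
function csvToWikitable(csv) {
    var rows = csv.trim().split('\n').map(function (line) {
        return line.split(',').map(function (cell) { return cell.trim(); });
    });
    var out = ['{| class="wikitable"'];
    out.push('! ' + rows[0].join(' !! '));      // header row
    rows.slice(1).forEach(function (row) {
        out.push('|-');
        out.push('| ' + row.join(' || '));
    });
    out.push('|}');
    return out.join('\n');
}
```

A Lua module in Template/Module space would do the same transformation server-side, so the wikitable stays in sync with the CSV page without bot edits.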
Doc James (talk) 23:58, 26 March 2020
It would be nice to have a table where people can choose whether they want to see WHO, ECDC, or Johns Hopkins numbers. Maybe also throw in Worldometer. Or the highest of the three.
AntiCompositeNumber (talk) 00:29, 28 March 2020
It seems like @Tedder: has some sort of tool that they're using to create and maintain the case charts. I have no more information about it, since there's no BRFA or documentation. --
Tedder (talk) 03:30, 28 March 2020
Hey, yeah, after doing Oregon manually and spending a lot of time with data and sources, I realized coronavirusapi.com was auditable and matched my data. So I ran against Florida and then expanded to all the other 'missing' states. I'm keeping it updated manually, and only updating pages that aren't being updated (or corrected) by others (example). I'd love it if someone replaced it with better data; this was the best I've been able to gather.
GreenC (talk) 00:17, 27 March 2020
What are they doing over at Wikidata with Covid data? --
Jc86035 (talk) 07:33, 28 March 2020
@GreenC:, it looks like it's being entered manually for each region with two data points on each item for each day (the data entry seems to be occurring at irregular intervals). All of the items for each country and territory are linked from Template:Q.
Kaldari (talk) 22:09, 30 March 2020
I finally found a data source that is under an actual free license. Everything at https://covidtracking.com/ is under the Apache license (and they will soon be switching to CC0). They have an API and datasets on GitHub. However, they only cover data for the United States.
Months
Dimon2712 (talk) 16:11, 29 March 2020
Hi! When I translated Template:Interactive COVID-19 maps/Cumulative confirmed cases to Ukrainian I had a problem. The system "translated" 12/03/20 as Mar 12. But "March" in Ukrainian isn't "Mar". So I want to ask on which page in the English wiki I can find this process, so I can fix it on the Ukrainian wiki. Thanks!--
PrimeHunter (talk) 16:52, 29 March 2020
@Dimon2712: Where do you see 12/03/20 or Mar 12? It transcludes {{Interactive COVID-19 maps/common}} which says "03/28/20" and "Mar 28" directly in the source.
Dimon2712 (talk) 17:03, 29 March 2020
@PrimeHunter: thanks, but the source only has Mar 28. I meant dates in general. Please look at this template in ukwiki. If you drag the circle you will see that "Бер" was changed to "Mar"--
PrimeHunter (talk) 17:56, 29 March 2020
@Dimon2712: OK, I see it when dragging the circle at uk:Template:Interactive COVID-19 maps/Cumulative confirmed cases. Dates like 03/12/20 (I still don't see 12/03/20 anywhere) are stored in {{Interactive COVID-19 maps/data/global Confirmed covid cases by date-csv}}. It isn't listed under transcluded templates when mw:Extension:Graph reads it in {{Interactive COVID-19 maps/Cumulative confirmed cases}}. I don't know how date formatting works in the extension.
Yair rand (talk) 00:22, 30 March 2020
@Dimon2712: Unfortunately, I think this is blocked by phab:T100444. --
PrimeHunter (talk) 01:12, 30 March 2020
You can get month numbers instead of names by changing timeFormat('%b %d',scaledHandlePosition) in the local {{Interactive COVID-19 maps/common}}. Based on Date and time notation in Ukraine you may want timeFormat('%d.%m',scaledHandlePosition). This gives 12.03 instead of Mar 12. '%d.%m.%y' gives 12.03.20.
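Vega's timeFormat is backed by d3-time-format; as a rough illustration of what the specifiers discussed here expand to, a minimal stand-in formatter might look like this (supporting only %d, %m, %y, and %b, and using English month abbreviations):

```javascript
// Illustrative mini-formatter for the d3/Vega-style specifiers discussed
// above: %d = zero-padded day, %m = zero-padded month number,
// %y = two-digit year, %b = abbreviated month name. Vega's real timeFormat
// comes from d3-time-format; this only shows what each specifier produces.
function timeFormat(spec, date) {
    var pad = function (n) { return (n < 10 ? '0' : '') + n; };
    var months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
                  'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'];
    return spec
        .replace('%d', pad(date.getDate()))
        .replace('%m', pad(date.getMonth() + 1))
        .replace('%y', pad(date.getFullYear() % 100))
        .replace('%b', months[date.getMonth()]);
}
```

With a date of 12 March 2020, '%b %d' gives "Mar 12" while '%d.%m.%y' gives "12.03.20", which is why switching the specifier avoids the untranslated month name.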
Dimon2712 (talk) 08:29, 30 March 2020
@PrimeHunter: thank you! I've done it.--
PrimeHunter (talk) 18:07, 30 March 2020
I have changed the template to automatically use month numbers in non-English wikis.[13]
Wugapodes (talk) 23:57, 30 March 2020
Thanks for the hard work everyone. As a PSA, if anyone is looking for a new project, mw:Graphoid still uses Vega 2 despite the latest major version being Vega 5. As I've been developing the interactive maps, a lot of feature requests are hard or impossible in Vega 2, and lots of people would appreciate work to update the Graphoid back end. See phab:T223026.
Can you help with improving the Coronavirus epidemic maps?
John Cummings (talk) 15:51, 28 March 2020

Hi all

The maps on the Coronavirus pandemic articles need some technical help, which has been documented on Phabricator. I think that some of the tasks relate to templates, so please take a look.

If you're not able to assist with any of the tasks, please still subscribe to it; it will help software engineers understand there is community support for fixing these issues.

Thanks very much

Bawolff (talk) 19:25, 28 March 2020
As a dev, I can assure you, we do not prioritize bugs based on the number of subscribers, or even look at the number of subscribers. If you want to demonstrate support, award a token. We don't usually look at those either, but it has like a 5% chance of being noticed vs the subscribers list, which has like 0%.
AntiCompositeNumber (talk) 20:06, 28 March 2020
Looking at that task, I'm not even sure what it is you want to do that you can't. --
Snaevar (talk) 22:59, 28 March 2020
Some of those tasks in bug T248707 can be done with an mw:Extension:Graph map, see mw:Extension:Graph/Demo.--
John Cummings (talk) 15:22, 30 March 2020
Hi @Snaevar: could you please write on the phabricator task how they could be accomplished? It seems like the first issue is knowing which bit of software the bug is coming from. Thanks,
Snaevar (talk) 01:38, 31 March 2020
There is no point in doing that. Like Aklapper said in the bug, this forum right here is to ask for individual maps or improvements to them, while phabricator is for things like missing features (not an exhaustive list). In this case, Template:Interactive COVID-19 maps/Cumulative confirmed cases goes as far as can be done for now, while the remaining points, apart from the last three, will only be possible once T223026 gets fixed.--
Source code edit
1233 (talk) 12:13, 31 March 2020

Hi,

I am curious why the 2010 wikitext editor (with the editing toolbar on) is affecting the editing area: the source code has become styled unnecessarily. The only way for me to fix that is to disable the editor. Is anyone else experiencing the same issue?--

PrimeHunter (talk) 12:19, 31 March 2020
It sounds like Wikipedia:Syntax highlighting. You probably clicked a highlighter marker button Codemirror-icon.png to the left of "Advanced" in the toolbar.
1233 (talk) 13:19, 31 March 2020
Yeah found it and disabled. Thanks.--
An inter-language link missing?
Jojodesbatignoles (talk) 13:28, 31 March 2020
Hello to all the confined wikiistes on the Earth. The missing link concerns this article in French, fr:Réacteur_nucléaire_naturel_d'Oklo, in which the inter-language link towards English is ineffective.
I insist on the fact that the article in English, Natural nuclear fission reactor, approximately about the same subject, has got a link towards the equivalent in French that is effective. I already sent this problem to your French equivalent here: fr:Wikipédia:Le_Bistro/26_mars_2020#Lien_inter_langues_manquant_(suite) , but I'm not able to understand the bla-bla (rabbiting) they write.
Thank you for your explanations (if I understand them, then I'll try to repair the thing) or for fixing the bug.
Xaosflux (talk) 13:49, 31 March 2020
@Jojodesbatignoles: our article links are inherited from wikidata here: wikidata:Q64470499#sitelinks-wikipedia; you should be able to update the links there. —
Jts1882 (talk) 13:50, 31 March 2020
Template:Edit conflict There seem to be two wikidata items (one capitalised): Natural nuclear fission reactor (d:Q12029714) and natural nuclear fission reactor (d:Q64470499). — 
PrimeHunter (talk) 14:04, 31 March 2020
Template:Q is about the general concept of a Natural nuclear fission reactor. Template:Q is about the only known example, in Oklo, Gabon. There appears to be huge overlap between the content of the articles. No language has an article of both types so maybe they should be merged at Wikidata. The English article uses {{interwiki extra}} to add links to articles of the other type with {{interwiki extra|qid=Q12029714}} in Natural nuclear fission reactor#External links. The French Wikipedia has a similar template so without merging the Wikidata items you could add {{interwiki extra|qid=Q64470499}} to fr:Réacteur nucléaire naturel d'Oklo. Then the English and French article would both link all the other articles, but many of the other articles would still only link their own type.
Discussion at Talk:Visual pollution#Old revision is shown for logged out users
Andrybak (talk) 16:33, 31 March 2020
 You are invited to join the discussion at Talk:Visual pollution#Old revision is shown for logged out users. —⁠
Appeal for Peer Review
Guywan (talk) 20:11, 26 March 2020

I have recently finished a user script which would help file movers when moving files. See the request here. From WP:File movers:

The script is called FileMoverHelper, and can be found here. Here is a synopsis:

  • Get file move destination.
  • Move file to destination.
  • Remove {{Rename media}} template from destination (if there is one).
  • Find backlinks and redirect them to the new file name.

I appeal for peer review in order to prevent any unnecessary disruption that a faulty script of this kind may cause. Kindly leave any comments on the talk page. Regards,

BrandonXLF (talk) 04:24, 27 March 2020
@Guywan:, you should also match file names not in links, such as images in infoboxes; this will also allow you to match the Image: namespace that might be used. I don't think people are including filenames as plain text on pages. Your regex should also match an uppercase or lowercase first letter, as well as underscores instead of spaces, and it should escape regex metacharacters such as dots. I would just do something like new RegExp('[' + source.charAt(0).toUpperCase() + source.charAt(0).toLowerCase() + ']' + mw.util.escapeRegExp(source.slice(1)).replace(/[ _]/g,'[ _]')). You should use list=imageusage to get all image uses.
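The suggested regex could be sketched like this; escapeRegExp below is a local stand-in for mw.util.escapeRegExp, and buildFileRegex is a hypothetical helper name:

```javascript
// Sketch of the regex construction suggested above: first letter matches
// either case, spaces and underscores are interchangeable, and all other
// characters are escaped so that e.g. "." only matches a literal dot.
// escapeRegExp stands in for mw.util.escapeRegExp.
function escapeRegExp(str) {
    return str.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

function buildFileRegex(source) {
    var first = '[' + source.charAt(0).toUpperCase() +
                source.charAt(0).toLowerCase() + ']';
    var rest = escapeRegExp(source.slice(1)).replace(/[ _]/g, '[ _]');
    return new RegExp(first + rest);
}
```

For "Foo bar.png" this yields /[Ff]oo[ _]bar\.png/, which matches "foo_bar.png" and "Foo bar.png" but not names where the dot is some other character.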
BrandonXLF (talk) 04:28, 27 March 2020
Your input replacement and gallery regex should also account for a lowercase f in file:, and there's no need to use `${destination}`, just use destination.
Guywan (talk) 18:02, 27 March 2020
@BrandonXLF: Thanks for the help! I knew there would be some problems with the regular expressions. Template:Tq You're probably right about that; I'm nothing if not paranoid. Do you think the gallery regex is even necessary?
BrandonXLF (talk) 06:03, 28 March 2020
@Guywan:, it's no longer needed. For part 4 of the script, you should use continue/iucontinue to replace all results. When the call is finished, if result.continue is present, you should make the call again, merging result.continue into the new parameter object you pass to mw.Api. If result.continue is not present, then there are no more calls that need to be made. You should also consider the edit rate limit when making the replacements. You can get the edit rate from meta=userinfo by passing uiprop=ratelimits. It will be contained in query.userinfo.ratelimits.edit.type_of_user.hits and query.userinfo.ratelimits.edit.type_of_user.seconds. Store hits/seconds as editrate. You can then run an interval timer firing every 1000/editrate milliseconds that checks whether there's an edit in the queue and makes it, or otherwise clears the interval.
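The continuation loop described here might be sketched as follows; `api` is a stand-in for `new mw.Api().get(...)` so that any async function returning results in the same shape can exercise it, and the function name is hypothetical:

```javascript
// Sketch of the continue/iucontinue pattern described above. The real call
// would be new mw.Api().get(params); here `api` is any async function that
// returns {query: {imageusage: [...]}} plus an optional `continue` object.
async function getAllImageUsage(api, title) {
    var params = {
        action: 'query',
        list: 'imageusage',
        iutitle: title,
        iulimit: 'max'
    };
    var pages = [];
    for (;;) {
        var result = await api(params);
        pages = pages.concat(result.query.imageusage);
        if (!result.continue) {
            break;                                // no more batches
        }
        Object.assign(params, result.continue);   // carries iucontinue forward
    }
    return pages;
}
```

The key point is that the server's `continue` object is merged wholesale into the next request's parameters, so the loop works regardless of which continuation keys the API returns.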
Guywan (talk) 21:54, 28 March 2020
@BrandonXLF: Thanks again. I've implemented your suggestions. Do you know where the rate limits of different user groups are defined?
BrandonXLF (talk) 23:03, 28 March 2020
@Guywan:, the API returns the correct edit rate for the user calling the API, although I'm not sure what it does for admins, who don't have an edit rate limit from what I can tell. The settings are at https://noc.wikimedia.org/conf/highlight.php?file=InitialiseSettings.php.
SD0001 (talk) 04:49, 29 March 2020
For admins, it doesn't return anything (data.query.userinfo.ratelimits is an empty object - just tried it out on the testwiki). So at present the script would throw an error if an admin uses it.
Guywan (talk) 18:05, 31 March 2020
Template:Fixed Right now, I can't think of anything else that needs to be done, except for the script to be tested by a file mover.
Steel1943 (talk) 18:26, 27 March 2020
@Guywan: I'm not much of a scripting expert, but I know enough to bring up a concern. So, in your script, I see no references specifically for replacing file links (referencing "Template:Tq" in your items above) that may contain an "Image:" or "Media:" prefix instead of the traditional "File:" namespace prefix. Well, specifically, I don't see the words "Image" or "Media" anywhere in the script, which is why I assume that my aforementioned concern is valid. I have this concern because in regards to at least the "Image:" file link prefix, it is still being used on several pages' file links. Is this addressed in the script?
Guywan (talk) 18:52, 27 March 2020
@Steel1943: Your concern is valid. I think that, working from BrandonXLF's feedback, I have addressed it. I did not consider the "Image:" prefix, because I made the assumption that editors would behave and not use it. I wasn't aware of a "Media:" prefix; where is it used?
Steel1943 (talk) 19:59, 27 March 2020
@Guywan: See the first section of Help:Files for a bit more information than I'm obviously providing about the "Image:" and "Media:" prefixes. Regarding the "Image:" prefix: maybe the script could be designed to find the applicable file links that contain the "Image:" prefix (and either retain the "Image:" prefix or replace the prefix with "File:", but from what I recall, doing the latter through any sort of automation is controversial) when doing the redirect replacements. And in regards to the "Media:" prefix: I don't recall ever seeing any currently active file links with the "Media:" prefix, but it's probably a "better safe than sorry" situation to consider links with the "Media:" prefix. (Also, per Help:Files, it looks as though any file links which use the "Media:" prefix should continue to use the "Media:" prefix even if a redirect bypassing occurs.)
SD0001 (talk) 05:23, 28 March 2020
@Guywan: For your script to load reliably, you need to make sure that the dependencies are loaded before the script code is run. You can do this by replacing $(() => with
$.when($.ready, mw.loader.using(['mediawiki.util', 'mediawiki.api', 'mediawiki.user', 'mediawiki.notify'])).then(() =>
Guywan (talk) 21:54, 28 March 2020
@SD0001: Thanks for the help. I've never used mw.loader.using() in any of my scripts, and they seem to run fine. Could you clarify what it means to load reliably?
SD0001 (talk) 04:46, 29 March 2020
Not a big deal. But all userscripts and gadgets do this. It just ensures, in the unlikely scenario where your script has loaded before the mediawiki.* dependencies have loaded, that your script waits for the dependencies to load before starting off.
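The race SD0001 describes is the generic "wait for all prerequisites" problem, solved by joining promises. A framework-free sketch (loadModule is a hypothetical stand-in for mw.loader.using, which likewise returns a promise):

```javascript
// Hypothetical stand-in for an async module loader such as
// mw.loader.using: resolves with the module name after a delay.
function loadModule(name, delayMs) {
  return new Promise((resolve) => setTimeout(() => resolve(name), delayMs));
}

// Run `main` only once every prerequisite promise has resolved,
// regardless of the order in which they finish; the same idea as
// $.when($.ready, mw.loader.using([...])).then(...).
function runWhenReady(prereqs, main) {
  return Promise.all(prereqs).then(main);
}
```

Without the join, a script that merely assumes the modules exist will work most of the time (they usually load first) and fail only rarely, which matches the "seems to run fine" experience reported above.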
DannyS712 (talk) 05:30, 28 March 2020
Template:Closing
RoySmith (talk) 18:00, 31 March 2020
Could somebody who understands template magic look at Template:Closing? The {{#if:{{{admin|}}}...}} is obviously supposed to switch between an admin close and a NAC, but it always comes up NAC. --
QEDK (talk) 18:28, 31 March 2020
@RoySmith: Looks like it's working? See Special:PermaLink/948377314. --
RoySmith (talk) 18:34, 31 March 2020
@QEDK:, It didn't work for me on this AfD. I get the "An editor is in the process of closing this discussion." version, with the link to WP:NAC. Maybe somebody took away my mop and didn't tell me? --
QEDK (talk) 18:38, 31 March 2020
@RoySmith: You didn't pass anything in the Template:Para parameter, so it got parsed as empty, making the #if-case false. Try something like Template:Tlx. --
RoySmith (talk) 18:42, 31 March 2020
@QEDK:, I'll try that the next time I use it, but I don't recall ever having this trouble before. I see @Wugapodes: made a recent change to the template; maybe that caused the change in behavior? I note the template documentation says, "This template takes no parameters."; maybe that's just wrong? --
QEDK (talk) 18:47, 31 March 2020
@RoySmith: Yes, that's the change which introduced the new parameter; the documentation wasn't updated afterwards. It previously said "administrator or editor", and now that's dependent on the parameter. --
Wugapodes (talk) 18:49, 31 March 2020
Yes, that was the purpose of the change I made. I felt the previous wording of "admin or other editor" was redundant. If there's a need to specify that the closer is an administrator, it should state that unequivocally; otherwise it defaults to the catch-all term "editor". I guess I forgot to change the documentation. Template:Ec
RoySmith (talk) 18:51, 31 March 2020
@Wugapodes:, I've gone ahead and reverted that, under the "if it ain't broke, don't fix it" principle. --
Wugapodes (talk) 18:56, 31 March 2020
No problem, seems you get more use out of it than I do anyway.
Rotated file at Church of Saint Mary of Jesus
Ymblanter (talk) 18:06, 31 March 2020
Does anybody understand why the file in the infobox appears to be 90 degrees rotated? The original on Commons is not rotated.--
QEDK (talk) 18:32, 31 March 2020
@Ymblanter: It looks OK for me; can you confirm whether the issue still exists? For anyone else: there have been no recent changes to the file, the article or the infobox, and nothing else that I can check comes to mind. --
Ymblanter (talk) 18:33, 31 March 2020
Yes, for me it still exists. I checked indeed that there have been no recent changes.--
QEDK (talk) 18:34, 31 March 2020
Which browser version, device and operating system are you on? --
RoySmith (talk) 18:38, 31 March 2020
@Ymblanter:, It looks correct to me, but I've seen something like this before. I vaguely remember it being cache related, so perhaps try all the usual suspects of emptying your browser cache, purging the page, etc? --
Ymblanter (talk) 18:43, 31 March 2020
Thanks. I have never visited this page before, so I do not quite understand where the cache problems could come from, but I can indeed try waiting several days, which is the typical expiration time for the image cache.--
QEDK (talk) 18:52, 31 March 2020
Client-side (browser) caches can be cleared anytime (how depends on the browser). Purging the server-side cache is done on an "as needed" basis; the purge action will do that instantly. --
RoySmith (talk) 18:57, 31 March 2020
@QEDK:, Multi-level caching is 1) a good way to make things efficient and 2) a good way to make things confusing :-). I suspect the sequence of events is something like: 1) the server cached the wrong version. 2) Ymblanter viewed the page and now his browser cached the wrong version as well. 3) The server-side cache got invalidated and refreshed so it now had the correct version. 4) You and I looked at the page and saw it correctly, even though Ymblanter is continuing to see the stale version out of his browser cache. --
QEDK (talk) 19:06, 31 March 2020
The idea is that the server will now be serving the image with a different cache header, which effectively tells the browser that the image stored in its cache is out of date, and the browser would ideally re-download the new image (all modern browsers will, at least), so unless it's a particularly ancient browser with strict time-based cache headers, who knows. --
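For posterity, the time-based part of the freshness decision described above boils down to comparing a cached response's age against its max-age. A simplified sketch (real browsers also revalidate via ETag/Last-Modified, which this ignores; names are illustrative):

```javascript
// Simplified HTTP-style freshness check: a cached response is still
// fresh while its age (in seconds) is below the max-age the server sent.
function isFresh(storedAtMs, maxAgeSeconds, nowMs) {
  const ageSeconds = (nowMs - storedAtMs) / 1000;
  return ageSeconds < maxAgeSeconds;
}
```

Once the stored copy is stale by this test, a well-behaved browser re-fetches (or revalidates) rather than serving it, which is why a corrected server-side image eventually wins out.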
Ymblanter (talk) 19:17, 31 March 2020
I have a standard, latest-version Firefox on Manjaro Linux, nothing special.--
RoySmith (talk) 19:28, 31 March 2020
@QEDK:, Well, that's certainly the theory. In practice, however, I see enough problems where the fix is to manually purge the cache, that I have to assume something is broken in the WikiMedia server-side cache management implementation. --
QEDK (talk) 20:04, 31 March 2020
My latest Firefox (macOS Catalina 10.15.2) shows it fine as well. Wikimedia (Varnish, I think it's called) caching is indeed not perfect; just look at category updating and you'll see how ineffective it can be. For the sake of posterity, I've now tried it in Chrome/Safari/Firefox on Windows/macOS via Responsive Design Mode and I am unable to replicate the issue. Calling it a night. Template:): --
Ahecht (talk) 19:10, 31 March 2020
@Ymblanter: Seems likely it was caused by the Orientation field in the image's EXIF data saying it should be rotated 90 degrees. I re-uploaded the image with that field corrected. --
Ymblanter (talk) 19:17, 31 March 2020
Great, thanks. It still shows rotated to me, but hopefully now this is just a cache problem.--
TheDJ (talk) 21:41, 31 March 2020
@Ahecht:, this problem happens if EXIF and XMP-EXIF are not in sync (which is something the editing/exporting software is responsible for). The thumbnailing server and the browser can then make different choices in deciding what the 'correct' orientation is. —
Script for showing only the red lines in lists
Ark25 (talk) 02:01, 31 March 2020

Template:Moved from A list can contain links to Wikipedia articles. Is it possible to make a script that filters such a list, removing the lines that do not contain red links?

For example this list.

Say for example these 4 lines:

  1. Big Springs - Big Springs, Ohio
  2. Billingstown - Billingstown, Ohio
  3. Birds Run - Birds Run, Ohio
  4. Birmingham - Birmingham, Ohio

After applying the filter, only the 2nd and the 3rd line should be visible (Billingstown and Birds Run are red links at this moment):

  1.  
  2. Billingstown - Billingstown, Ohio
  3. Birds Run - Birds Run, Ohio
  4.  

Or, to make it easier, say the list contains only one link per line (red or blue):

  1. Big Springs
  2. Billingstown
  3. Birds Run
  4. Birmingham

The result would be:

  1.  
  2. Billingstown
  3. Birds Run
  4.  

The script should be a gadget or a Greasemonkey script or maybe even a bookmarklet. What's the easiest way to implement such a filter and where should I ask for help for someone to create such a script? Thanks. —

Redrose64 (talk) 07:26, 31 March 2020
This is more of a WP:VPT question. --
BrandonXLF (talk) 18:43, 31 March 2020
@Ark25:, I have two versions: javascript:$('.mw-parser-output li').each(function(){if(!$(this).find('a.new').length)this.innerHTML=''}) and javascript:$('.mw-parser-output li').each(function(){if(!$(this).find('a.new').length)this.style.display='none'}). The first one blanks the content of list items without redlinks, whereas the second one hides those list items entirely.
Ark25 (talk) 22:45, 31 March 2020

Template:Reply to These are really great scripts, thank you! The second is more useful for me because the result is more compact but both are great scripts.

Is there any chance you can make another script to remove elements not containing a certain string? Say for example the string is "ir" - then only the last 2 items in the list would remain. Thanks in advance. —

BrandonXLF (talk) 23:28, 31 March 2020
@Ark25:, would something like javascript:$('.mw-parser-output li').css('display','');if(str = prompt('Please enter a string to search for:'))$('.mw-parser-output li').each(function(){if(!$(this).text().includes(str))this.style.display='none'}) work?
Ark25 (talk) 23:40, 31 March 2020
@BrandonXLF: It works really well; it's amazing what JavaScript can do these days! Yesterday I posted the same question on Stack Overflow too; can I/you post the script there as well? Not sure why the script doesn't work on that particular page, but it works very well here on Wikipedia. —
BrandonXLF (talk) 23:56, 31 March 2020
@Ark25:, sure. The script I gave you is designed for Wikipedia only, since there are parts of the UI on Wikipedia that aren't part of the page content. A more general script would be javascript:(function(){for(var a=document.getElementsByTagName("li"),b=prompt("Please enter a string to search for:"),c=0;c<a.length;c++)a[c].style.display=a[c].innerText.includes(b)?"":"none"})().
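For clarity, here is the filtering logic of these bookmarklets separated from the DOM (the function name is illustrative): an item stays visible only if its text contains the search string.

```javascript
// Pure version of the bookmarklet's test: keep only the items whose
// text contains the needle, using the same String.prototype.includes.
function visibleItems(items, needle) {
  return items.filter((text) => text.includes(needle));
}
```

With the Ohio example from earlier and the needle "ir", this keeps only "Birds Run" and "Birmingham", matching the behaviour described in the thread.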
Ark25 (talk) 00:08, 1 April 2020
@BrandonXLF: Again, a great script, yes it works very well, everywhere. I've posted it on StackOverflow. Thank you very much! —
PrimeHunter (talk) 02:36, 1 April 2020
Many things can also be done with regex in the normal source editor and toolbar. For the simple case with only a link, click "Advanced" in the toolbar and then the search icon to the right. Enter \[\[(.*)\]\] at "Search for" and {{subst:#ifexist:$1||[[$1]]}} at "Replace with:". Checkmark "Treat search string as a regular expression" and click "Replace all". Then save the page, or preview if that's enough for your purpose. ifexist is an expensive parser function so at most 500 are allowed at a time.
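The same search-and-replace can be previewed outside the editor; here it is as JavaScript over a string of wikitext (the regex and replacement are exactly the ones above; the greedy .* assumes at most one link per line, per the "simple case" caveat):

```javascript
// Wrap each [[Link]] in an #ifexist check so that, once substituted,
// only links to non-existent pages (red links) survive.
function wrapInIfexist(wikitext) {
  return wikitext.replace(/\[\[(.*)\]\]/g, '{{subst:#ifexist:$1||[[$1]]}}');
}
```

Lines with several links would need the non-greedy `\[\[(.*?)\]\]` instead.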
Rhododendrites (talk) 00:46, 1 April 2020
For those of us who don't really know what we're doing, how can we make use of this script? —
Ark25 (talk) 04:14, 1 April 2020

@Rhododendrites: Just add a bookmark in your browser and, instead of providing a URL (web link), put the script there. To add a bookmark, right-click on the bookmark bar. If you don't see the bookmark bar, you probably don't have any bookmarks in it, so add the first one with CTRL+D.

Also, this is what a Google search on "how to make bookmarklet" returns:

  1. In Chrome, click Bookmarks->Bookmark Manager.
  2. You should see a new tab with the bookmarks and folders listed.
  3. Select the "Bookmarks Tab" folder on the left.
  4. Click the "Organize" link, then "Add Page" in the drop down.
  5. You should see two input fields. ...
  6. Paste the javascript code below into the second field.

Number of RFCs started per user
WhatamIdoing (talk) 01:51, 31 March 2020

The WP:RFC process has a Tragedy of the commons problem: we can accommodate a bad or unnecessary RFC here or there, but the more RFCs we have, and the lower the average value of the RFC, the less anyone wants to participate at all, or the less thoughtful their responses will be.

We are discussing some ways to improve this. One of the proposals is to limit the number of RFCs an editor can open in a month. The idea is to limit the "outlier" behavior, from the small handful of editors who create many RFCs, rather than to bother the ordinary editor (median RFCs started per year = 0). It seems to me that even among people who use the RFC process, it's unusual to create more than about three RFCs in a month. But I'd rather have "official records" instead of just telling people that I think most people won't start more than a couple, and that creating 10 in a month (which a now-banned editor did, a year or two ago) is a highly unusual and probably bad event.

I wonder whether anyone here could find out just how many RFCs each editor has started during a month (e.g., via database dumps). For some years, the RFCs have all been given a unique id number, and they're all listed at the RFC subpages. What I imagine would happen is that you search for the id number and then record the first username/link and the date after the RFC id (a few are signed with just a date, but it's not been common in recent years). The id numbers should prevent problems with double-counting a single RFC multiple times (e.g., if it's listed as history and science).

What I'd like probably looks like this:

RFC record
User          Highest number opened in 30-day period
Alice         2
Bob           1
WhatamIdoing  5

Is that possible?

QEDK (talk) 06:59, 31 March 2020
@WhatamIdoing: The RfC ID is only for currently-running RfCs (according to Legobot tracking Template:Tlx usage). If we track RfCs with 7-hex-digit hashes like Legobot does, then by roughly 2300 RfCs the collision rate becomes too high (people will start noticing, the bot will get confused) to use the hashes for any realistic purpose. Simply put, even if you want the hashes as a tracking mechanism in the future, you have to change the bot's hash function to produce longer hashes. --
WhatamIdoing (talk) 16:56, 31 March 2020
qedk, I thought that the id numbers basically started at zero and counted up in order. If there are only ~2300 numbers in use, then I think that would get approximately two years' worth of data. That would be a good starting point. (Also: this is a one-off to validate my impressions. I don't need daily updates forever or anything like that.)
QEDK (talk) 18:23, 31 March 2020
@WhatamIdoing: The full range would ideally be 16^7 IDs (0000000 through FFFFFFF). At around ~2300, you'd probably start noticing collisions, where two RfCs have some probability of having matching hashes. Either way, there are two simple approaches: a tracking bot that records Legobot's Template:Para values while they are in use, or making Legobot itself do the job. --
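One way to sanity-check the ~2300 figure: by the birthday approximation, the probability of at least one collision among k uniformly random 28-bit (7 hex digit) IDs is about 1 - exp(-k(k-1)/(2·2^28)), which crosses roughly 1% near k = 2300. A quick computation (reading "start noticing" as a ~1% threshold is my assumption, not stated in the thread):

```javascript
// Birthday-bound estimate: probability of at least one collision
// among k random IDs drawn uniformly from 2^bits possible values.
function collisionProbability(k, bits) {
  const n = Math.pow(2, bits); // 7 hex digits = 28 bits = 16^7 values
  return 1 - Math.exp((-k * (k - 1)) / (2 * n));
}
```

collisionProbability(2300, 28) comes out at about 0.0098, i.e. just under 1%.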
Redrose64 (talk) 19:14, 31 March 2020

The rfcid numbers are not re-used. If desired, I can rig up a demonstration of why they must not be re-used, except when the second use is on the same page where the first use occurred.

Legobot maintains a permanent table of rfcids that it has issued in the past, and it uses this when generating a new rfcid to ensure that there is no collision (if you're interested, the table is described here, it's the first one of the five). So 16^7 are certainly possible, although as time goes by the process of generating a fresh rfcid will slow down as the collision rate increases. Some RfCs have had more than one rfcid: there are several reasons that this might happen, and the most obvious one is when a Template:Tlx tag is removed (perhaps by Legobot itself, due to thirty days having elapsed) and then re-added without the Template:Para parameter (perhaps to extend the RfC). Another reason is that an RfC may need to be removed from an inappropriate category - for example, if somebody uses Template:Tlx without any parameters at all, Legobot will put it into Wikipedia:Requests for comment/Unsorted, and the only way of getting it out again, without actually ending the RfC, is to remove the Template:Para and simultaneously add a valid category, as I did Template:Diff - Legobot followed that up with these edits. --

QEDK (talk) 20:24, 31 March 2020

I tried finding the source code; seems you're better at web-sleuthing than me! It doesn't look like the table maintains a record of revisions/filer/description, so that's a bit unfortunate. OTOH, I did get to see a good bit of ol' programmer humour:

$page->addSection('HELP! PLEASE!','Something is very wrong. I\'m having trouble generating a new RFC id. Please help me. --~~~~');
		die();

@Redrose64: SELECT count(rfc_id) FROM rfc WHERE rfc_id=?; This counts the number of entries in the rfc_id column with one character? I don't understand it. --

BrandonXLF (talk) 04:32, 1 April 2020
@QEDK:, in mysqli, when a ? is added to a prepared statement it acts as a placeholder: it is replaced later on when bind_param (or execute) is called. In this case it's checking for a duplicate rfc_id equal to $tempid in the database, because the ? is replaced with $tempid.
QEDK (talk) 04:55, 1 April 2020
@BrandonXLF: Thanks a ton! That clears it up a bit: generate a random MD5 hash, truncate it to 7 characters, and check for the hash in rfc_id; in case the function collides with existing output, it retries up to 5 times, eventually messaging Chris G asking for help. Neat-o, but it should really use open addressing/chaining instead of retrying the hash function imo. --
Button coding with Lua
Sdkb (talk) 06:30, 1 April 2020
We'd like some help over at Wikipedia_talk:Teahouse#Suggestions_for_improving_the_Teahouse_design getting a custom button to be able to display a "pressed" state when linking to the page it's on the same as Template:Clickable button 2 already does. Would anyone who knows Lua be able to help? (ctrl+f for "depressed" and start reading from there.)
BrandonXLF (talk) 01:12, 2 April 2020
@Sdkb:, Lua is not needed for this; all Lua does is create wikitext. What you would do to show a "pressed" state is check whether the current page name matches the target of the button; if it does, you would add a class and use TemplateStyles to add CSS that makes the button look pressed, or you would just add inline styles (not recommended, since there's more than one button).
Lots of first level sections are not collapsible
SharabSalam (talk) 06:39, 27 March 2020
Template:Tracked Hi, in the Legality of bestiality by country or territory article there are lots of second-level sections that are not collapsible. Can someone see where the problem is? I am using my phone and I can't find the problem.--
RoySmith (talk) 14:08, 28 March 2020
@SharabSalam:, Can you give us more details? Are you looking at this with a normal web browser on your phone? Does the URL contain "en.wikipedia.org" or "en.m.wikipedia.org"? Or are you using the Wikipedia App, and if so, Android or IOS? And, specifically, which section head do you think should be collapsed? --
SharabSalam (talk) 08:15, 29 March 2020
This is happening on my phone. It is Android and I am using Chrome. Lots of second-level sections in that article are not collapsible. The third-level sections have underlines, which is not how a normal third-level section should look. Here are some screenshots to illustrate the problem.
Screenshot A
Screenshot B
  • Screenshot A is from the article. Notice that there is no downwards/upwards arrow and that there is an underline under the third-level section header, which is not normal.
  • Screenshot B is from my sandbox. This is how sections should look like.
  • All of the sections in that article are like this except, I think, two sections at the end. One is "Notes" and the other is "History". This is annoying, especially because there is a very long section called "National law" that is not collapsible, and you have to scroll down for 15-20 seconds to reach the end of it.--
PrimeHunter (talk) 13:13, 29 March 2020
I see the same in Safari on an iPhone. A level 2 section has "==" and level 3 has "===". It's level 2 which isn't collapsible and level 3 which has underlines that aren't normally there. I have a collapse arrow at "Notes", "History" and "Zoophilic pornography", but the latter only collapses the first line with Template:Tlx. On the mobile version in desktop Firefox I see the three navboxes in [14], so something odd is going on. I don't see them in preview, and I never see navboxes on articles in the mobile version. They aren't supposed to be in the HTML on mobile, so it's not just an unloaded CSS statement hiding them. A null edit didn't change anything. None of the unusual things appear in preview, so it's difficult to investigate.
SharabSalam (talk) 18:45, 29 March 2020
I corrected my comment. Changed "first" to "second" and "second" to "third".--
RoySmith (talk) 15:38, 29 March 2020

@SharabSalam:, Wow, this is really weird. I made a copy of the page in my userspace and started hacking away at chunks. Eventually I got down to this diff causing the problem to appear or not. I'm totally mystified.

What this smells like is running into some kind of size limit rather than any actual broken markup. I'm not familiar with the code base, but I could imagine something like reading the next X kb of text to see if you can find the start of the next level-2 section, and that failing if the next section head is further away than that. But that's just pure speculation. --

RoySmith (talk) 15:47, 29 March 2020
BTW, the maximum page size per WP:CHOKING is about 2 MB, so we're not even close to that. Also, I was reproducing this on my desktop by explicitly viewing the mobile version (i.e. en.m.wikipedia.org), so it's not specifically a mobile browser problem; it's happening on the server side. --
RoySmith (talk) 15:48, 29 March 2020
OH, and thank you for the screenshots. That was really helpful. --
RoySmith (talk) 16:04, 29 March 2020

More on this being a server-side issue, the h2 tags are being generated differently. For example a bad one and a good one:

<h2 class="section-heading"><span class="mw-headline" id="Public_opinion">Public opinion</span></h2>
<h2 class="section-heading collapsible-heading open-block" tabindex="0" aria-haspopup="true" aria-controls="content-collapsible-block-3"><div class="mw-ui-icon mw-ui-icon-mf-expand mw-ui-icon-element mw-ui-icon-small mf-mw-ui-icon-rotate-flip indicator mw-ui-icon-flush-left"></div><span class="mw-headline" id="Zoophilic_pornography">Zoophilic pornography</span></h2>

but that's not surprising given what I already found. --

SharabSalam (talk) 18:45, 29 March 2020
Thanks for working on this. @PrimeHunter:, do the navboxes still appear in this version of the article in RoySmith's user space? Just to know if these problems are related.--
PrimeHunter (talk) 20:44, 29 March 2020
The navboxes appear before the edit but not after when viewing the mobile version with Firefox on a desktop. After the edit they are also omitted in the html as expected.
BrandonXLF (talk) 22:54, 30 March 2020
It has to do with size: no matter what I remove, as long as I remove enough of the page, it parses properly on the mobile site. Both halves of the page work properly when split: 1 and 2. The edit that fixed the issue in RoySmith's userspace reduced the size of the page from 112867 bytes to 112834 bytes.
BrandonXLF (talk) 23:14, 30 March 2020
It has to do with the number of files (at least partly; maybe it has to do with the size of the file links), because all RoySmith did was remove a file to fix the issue, and I removed a different file here [15] and it also fixed the issue. This revision with a lot of files [16] has the same issue as well.
BrandonXLF (talk) 01:15, 31 March 2020
I was correct: MobileFrontend sets MFMobileFormatterOptions.maxImages to 1000, and since there are over 1000 images, the server doesn't apply the mobile formatting.
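In other words, the server-side decision reduces to a count check. A sketch of the behaviour as diagnosed (not MobileFrontend's actual code; whether the boundary at exactly 1000 is inclusive is an assumption):

```javascript
// Mobile formatting (collapsible sections etc.) is applied only when
// the page's image count does not exceed the configured maximum.
function mobileFormattingApplied(imageCount, maxImages = 1000) {
  return imageCount <= maxImages;
}
```

This matches the observed symptoms: removing any single file from a page just over the limit restores the collapsible sections.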
RoySmith (talk) 15:46, 1 April 2020
@SharabSalam: I don't know if you're following the phab ticket, but there was an interesting suggestion that using flag emojis (https://flagpedia.net/emoji) might be a possible workaround. --
SharabSalam (talk) 22:57, 1 April 2020
@RoySmith:, I had that idea in mind; I will see. By the way, I can't access Phabricator. I always get an internal server error. I can only get in if I use another Wi-Fi network or a proxy, and I don't know why. This has been the case since I joined Wikipedia.--
SharabSalam (talk) 23:46, 1 April 2020
It is now fixed [17]. I replaced X and question marks with emojis. Thanks everyone for your help.--
RoySmith (talk) 01:30, 2 April 2020
@SharabSalam:, Well, I'm glad we found something that got you up and running. Weird about not being able to access phab. If you can give me some more details (URL you're accessing, error message you get, etc) I'll see if I can figure out what's up with that. --
SharabSalam (talk) 03:12, 2 April 2020
  • The message is like this:

Template:Quote

It's not a big deal. It seems to happen only on my own Wi-Fi network, because when I use another network or a proxy I can access the site. When I want to visit it, I just use a proxy.--
Eliminating certain users' edits from your watchlist
The Rambling Man (talk) 20:37, 1 April 2020
This may have been asked before, and I'm dreadfully sorry if so, but is there a way to remove specific users' edits from your watchlist?
Jorm (talk) 20:53, 1 April 2020
Something like this:
let wl = document.querySelector('ul.special');
for (let li of wl.querySelectorAll('li')) {
	let un = li.querySelector('bdi');
	if (un.innerHTML === 'BobTheUser') { li.css.display = 'none'; }
}
(untested).--
The Rambling Man (talk) 21:03, 1 April 2020
And sorry to be a dunce, but where do I add that code please?
Yair rand (talk) 21:10, 1 April 2020
The code is broken in various ways, so I'd recommend against using it. (Normally, JS code can be added to Special:MyPage/common.js.) Any code to remove particular listings would need to work somewhat differently depending on what settings one is using. --
The Rambling Man (talk) 21:41, 1 April 2020
Well I'm just using standard monobook.js and would like to remove one specific disruptive user from my watchlist if that's possible.
Yair rand (talk) 22:30, 1 April 2020
@The Rambling Man: Specifically the relevant settings are the "Use non-Javascript interface" option in Special:Preferences#mw-prefsection-watchlist and the "Group changes by page in recent changes and watchlist" in Special:Preferences#mw-prefsection-rc. Assuming both options are at the default state (unchecked), then, using Jorm's code (slightly modified below to use the correct style keyword and to run after load) should work:
$( function () {
	let wl = document.querySelector('ul.special');
	for (let li of wl.querySelectorAll('li')) {
		let un = li.querySelector('bdi');
		if (un.innerHTML === 'BobTheUser') { li.style.display = 'none'; }
	}
} );
(Replace "BobTheUser" with the appropriate username to be blocked.) This should work, so long as the watchlist JS itself isn't doing anything crazy, which it might. --
The Rambling Man (talk) 22:37, 1 April 2020
Fantastic, @Yair rand:, much appreciated.
Xaosflux (talk) 14:02, 2 April 2020
Just FYI, using JS to do this will literally just "delete the line" from the screen. If you are not using the "grouped" option and a "normal user" makes a change, then a "hidden user" makes a change to the same page, your watchlist won't be able to show the "normal" change - it will just be empty. —
Template not working
Capankajsmilyo (talk) 06:32, 31 March 2020
Kindly check Template:Graph:PageViews. It is not working as expected. Thanks
QEDK (talk) 07:01, 31 March 2020
Just needed a purge. Probably something MW-side. --
Capankajsmilyo (talk) 16:29, 2 April 2020
PS Talk:Mahavira. Not yet resolved.