Project:Village pump (technical)/Archive 180
Counting red links
talk) (I created a template for the {{Honda Sports Award}}, which has 620 entries.
I exported to Excel so I could see how many names were duplicated, but I would also like to count red links.
While I could do so manually, I would like to monitor the count over time, so I'd like to know if there is an easy way to count them.--
talk) (document.querySelectorAll('a.new').length
<-- gets you all of them
cheers.--
talk) (If you want just in that template, it gets more complicated:
let count, container = document.querySelector('div[aria-labelledby="Honda_Sports_Award"]'); if (container) { count = container.querySelectorAll('a.new').length; }
But if you just run it from the template's page, the first one will work. --
Help understanding this Javascript.
talk) (Hi everyone. I've recently discovered this Javascript code that recognises bolded keywords and adds symbols in front of them. I made a fork out of it on my common/js
by adding icons and altering keywords. Here is my understanding of it:
- Under var vs=, each icon corresponds to an integer starting from 0, with each new line being incremented by 1.
- Under var la=, each item associates a string with the icon's integer.
I'm having some success with changing and adding icons and strings, but my question is how would it prioritise which string to use? Say I have different icons that I want to use for two terms, "delete" and "speedy delete". Is there a way to suppress the delete icon when "speedy delete" is typed? --
talk) (var vs= ...
actually makes a giant string where "+" is string concatenation. var vt=vs.split("#");
then splits the string at each "#" to make an array with numbering from 0. The result is as you say. The script is currently based on individual words and selects an icon for each matching word. If you set an icon for 'speedy' alone then you get both the speedy icon and the delete icon when the page says "speedy delete". Is that OK? You would similarly get two icons for "speedy keep".
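The construction PrimeHunter describes can be reproduced in isolation; a minimal sketch (the icon file names here are illustrative, not taken from the actual script):

```javascript
// Mirror of the script's pattern: "+" concatenates the pieces into one giant
// string, and split("#") turns it into a zero-indexed array of icon names.
var vs = "Symbol_keep_vote.svg" + "#" +
         "Symbol_delete_vote.svg" + "#" +
         "Symbol_speedy_delete.png";
var vt = vs.split("#");

console.log(vt[0]);     // "Symbol_keep_vote.svg" — icon 0
console.log(vt.length); // 3
```

Each entry's position in vt is the integer that the la mapping refers to, which is why adding a new line to vs shifts the numbering of everything after it.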
talk) (@PrimeHunter:, would it be appropriate to essentially consider it as an OR statement? If the test string "speedy keep" exists, then
- the "keep" icon activates
- the "speedy keep" icon recognises "keep" but waits on a "speedy" somewhere in the string
- the "speedy keep" icon recognises speedy as well and activates
I also noticed that it doesn't appear to do exact string matching. I'm guessing the script doesn't allow for it? --
talk) (That is not how it works. A "speedy keep" icon doesn't recognize or activate anything. Icons must be associated with a single word and they activate on any bolded string containing that word. A "speedy" icon activates on a "speedy keep" comment. So does a "keep" icon. Both icons will be shown. Capitalization doesn't matter.
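The matching behaviour PrimeHunter describes — each configured word fires independently, case-insensitively, on any string containing it — can be sketched as follows (a simplification with a hypothetical word-to-icon table, not the script's actual data):

```javascript
// Hypothetical word→icon-index table in the style of the script's "la" mapping.
var wordToIcon = { "keep": 0, "delete": 1, "speedy": 2 };

// Return the icon indices for every configured word the comment contains.
// Matching is per-word and case-insensitive, so a multi-word phrase like
// "speedy keep" triggers every matching word's icon independently.
function iconsFor(comment) {
  var lower = comment.toLowerCase();
  return Object.keys(wordToIcon)
    .filter(function (word) { return lower.indexOf(word) !== -1; })
    .map(function (word) { return wordToIcon[word]; });
}

console.log(iconsFor("Speedy keep")); // [0, 2] — both icons shown
console.log(iconsFor("Delete"));      // [1]
```

There is no phrase-level rule, so nothing ever suppresses one word's icon because another word is also present.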
talk) (@PrimeHunter:, I think I understand now. So there's no way to have an icon become associated with a string with more than one word then? --
talk) (Right.
Gadget-MenuTabsToggle
talk) (Hello, this gadget (see the section name) is not working now because the images are no longer available. I fixed it on our project. If you want, please copy the content of ckb:MediaWiki:Gadget-MenuTabsToggle.css and paste it into MediaWiki:Gadget-MenuTabsToggle.css. Thanks! ⇒
talk) (They were just moved again, so is there really a need to rename them all from commons? I guess the svgs are MIA. But does this gadget even still work? ~ Amory
"Strike out usernames that have been blocked" option
talk) (If you have the "Strike out usernames that have been blocked" option enabled in the appearance section of your preferences, it strikes out... well, blocked users. HOWEVER, it doesn't strike out globally locked users.
See Category:Wikipedia sockpuppets of Northernrailwaysfan for example, where User:Mr Fenton's Helicopters and User:Q3 Academy Tipton aren't struck out.
Those should also be struck. I'm posting here because I don't know where else this should be posted.
Help Needed (read, begged for): MediaWiki JS Interface
talk) (Template:Resolved
Ironically, I've been using AJAX to make synchronous API calls. I'd really like to swap over to using MediaWiki's JavaScript API interface, but I don't know how to make synchronous calls to it.
var data;
new mw.Api().get({
"action": "query",
"list": "backlinks",
"bltitle": "Wikipedia:Village pump (technical)"
}).done(result => { data = result; });
console.log(data);
Of course, 'undefined' is logged to the console since get()
or done()
do not wait for the request to be 'completed'. Is it possible to make synchronous calls to the API via this method? I am aware that I could write }).done(result => { console.log(result); });
, but let us ignore that for argument's sake. Regards,
talk) (@Guywan: mw.Api does not support synchronous API calls. Rightfully so, since synchronous calls block the JavaScript runtime's event loop which causes the browser to freeze until the API response is received.
However, you can use the ES6 async/await syntax to make your asynchronous code look synchronous. But note that ES6+ features are not supported by older browsers (IE 11, etc.) and are thus disallowed in site-wide JavaScript.
async function main() {
var data = await new mw.Api().get({
"action": "query",
"list": "backlinks",
"bltitle": "Wikipedia:Village pump (technical)"
});
console.log(data)
}
main();
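The ordering problem is inherent to Promises, not to mw.Api specifically. A self-contained sketch with the API call stubbed out (fakeApiGet is a stand-in for illustration, not a real mw method):

```javascript
// Stand-in for new mw.Api().get(...): returns a Promise, like the real client.
function fakeApiGet() {
  return Promise.resolve({ query: { backlinks: [] } });
}

var data;
fakeApiGet().then(function (result) { data = result; });
console.log(data); // undefined — the callback only runs after the current call stack

async function main() {
  // await suspends main() until the Promise settles, so the result is usable here.
  var result = await fakeApiGet();
  return result.query.backlinks.length;
}
main().then(function (n) { console.log(n); }); // logs 0
```

This is why the synchronous console.log in the original snippet sees undefined: the .done() callback has not yet run when it executes.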
Edits get lost by the Back button
talk) (When I start editing an article, visit another web page in the same tab, then use the Back button to return to my edits, my edits are gone -- I see the previous state of the page. This behavior is new. Until a week or so ago, I would routinely visit another page, then use Back to return to my incomplete edits. I really liked that functionality.
This appears to be a change in Wikipedia (different HTTP cache-control header, perhaps?), not the browser (I use Chrome 80.0.3987.149 on Windows 10). Here's a quick test I tried. I created the following HTML page:
<!DOCTYPE html>
<html>
<a href="https://en.wikipedia.org/wiki/Main_Page">Go to WP</a>
<form>
<textarea> </textarea>
</form>
</html>
navigated to it, entered something in the form, clicked on Go to WP, then hit Back, and the textarea content was still there.
But if I do the same thing with, say, my Sandbox page, the textbox content is reset. Has something changed recently? Thanks, --
talk) (I'm also seeing this and yeah, it is damned annoying to lose edits because chrome changed or because mediawiki changed; I don't know which. On win 10 and chrome, it's Template:Key press-click for a new tab that gets the focus. Still, habits are very hard to break so there is the 'Warn me when I leave an edit page with unsaved changes' at Special:Preferences#mw-prefsection-editing.
—
Remind me bot
talk) (I'd like to propose that a bot be used to allow users to schedule reminders for themselves. The basic spec is:
- User adds an entry to /reminders.json
- User adds user page to a specific category (not created yet)
- Each day, the bot retrieves the user pages in that category, transforms them to the reminder pages ("User:Example" => "User:Example/reminders.json"), retrieves the scheduled reminders, and posts any for the day
- User is responsible for removing old reminders from their /reminders.json page, but the bot only checks to see if there are reminders for "today" (UTC at noon) and so doesn't care if there are old ones left
See Wikipedia:Bots/Requests for approval/DannyS712 bot 68 for more. --
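The page-name transform and the daily lookup in the spec above are simple enough to sketch. The JSON shape below (ISO date keys mapping to lists of messages) is an assumption for illustration, not taken from the BRFA:

```javascript
// Step from the spec: "User:Example" => "User:Example/reminders.json"
function reminderPage(userPage) {
  return userPage + "/reminders.json";
}

// Hypothetical reminders.json shape: ISO date keys mapping to message lists.
// The bot only looks up "today", so stale entries are simply ignored.
function dueReminders(reminders, todayIso) {
  return reminders[todayIso] || [];
}

var sample = { "2020-03-25": ["Check the GAN backlog"] };
console.log(reminderPage("User:Example"));       // "User:Example/reminders.json"
console.log(dueReminders(sample, "2020-03-25")); // ["Check the GAN backlog"]
console.log(dueReminders(sample, "2020-03-26")); // []
```

The lookup-by-today design is what makes leaving old reminders in place harmless, as the spec notes.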
talk) (Why not just make a gadget that stores things in the user's private JSON store (userjs-foo
in the options API), and checks that every day? --
Exclude Templates in Search
talk) (Is there some way to search article text, but exclude navbox templates? For example, if I want to search Wikipedia for rumaki, I typically don't want to find all the pages which include Template:Bacon. One trick is to find an obscure term that appears in the template and search for, e.g., [rumaki -samgyeopsal]; but that isn't right, either, because it excludes pages that mention rumaki in the text of the page as well as in the Bacon template. Thanks, --
talk) (I just had to do this recently, and it was a bit easier, because the page had a disambiguator which could be searched for as well since it would only appear in links and not normal text. It's not a perfect solution, but you can try a regexp by searching insource:/\[\[[Rr]umaki/
, which should match anything that links to that page, but it won't catch anything that links to a redirect to it, which may or may not be a problem. If there's a better way, I'd certainly like to know. –
talk) (insource regex is expensive by itself and may time out. mw:Help:CirrusSearch#Insource says: "When possible, please avoid running a bare regexp search". If the goal was to find links (I doubt it was although a link was posted), you can search insource:rumaki insource:/\[\[[Rr]umaki/
. My script does something similar.
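PrimeHunter's combined query works because the cheap keyword filter narrows the candidate pages before the expensive regex runs over their source. The regex itself behaves like an ordinary one; a quick check against invented sample wikitext:

```javascript
// The regex from the query, in JavaScript form: match a wikilink to the page,
// accepting either capitalization of the first letter.
var linkToRumaki = /\[\[[Rr]umaki/;

console.log(linkToRumaki.test("Serve with [[rumaki]] as an appetizer.")); // true
console.log(linkToRumaki.test("Serve with [[Rumaki]] skewers."));        // true
console.log(linkToRumaki.test("Rumaki is mentioned but not linked."));   // false
```

As noted above, links routed through a redirect ([[Rumaki (dish)|rumaki]]-style piping would still match, but a differently named redirect would not) are the remaining blind spot.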
Link rot: pqarchiver.com is gone.
Example 2
talk) (Edited at Special:Diff/947089613
This link is dead:
Chicago Tribune hosts it online at: (I used this link.)
Here, the live copy on chicagotribune.com seems best, but it had to be searched-for.
If the URL at chicagotribune.com goes away, Internet Archive of chicagotribune.com is a fallback.
The "automatable" fix would be pqasb.pqarchiver.com (if found). An archive of an archive seems ugly, but it is better than a broken link. A bot might or might not be able to find a "live" link, depending on the original publisher. If someone wants to manually search out the "live copy" afterward, the archive of the archive can make it easier. -
Deepcat searches not returning all results
talk) (Doing a deepcat search like deepcat:"Lakhimpur Kheri district" finds only 76 results, though there ought to be more. Pages like Lakhahi (a member of Category:Villages in Lakhimpur Kheri district which is in turn a member of Category:Lakhimpur Kheri district) don't show up in the results.
mw:Help:CirrusSearch#Deepcategory doesn't mention anything that would be responsible for such behaviour. Is this a bug?
Edit screen issue
talk) (When I try to edit The Pleiades (volcano group), the edit screen works abnormally and cannot be edited properly. It's not the first time this problem has occurred; it seems to hit separate articles at different times.
talk) (Screwing around with the latest addition to my local CSS and clearing caches didn't resolve the problem FYI.
talk) (It looks like it tries to make two edit areas. Is that normal for you? I haven't seen it before. Does safemode work? Does it work to log out? Try to disable syntax highlighting on the highlighter marker button
to the left of "Advanced". Did you clear cache with only F5, with Ctrl+F5, or clear the entire browser cache? See Wikipedia:Bypass your cache.

talk) (Safemode works, but I don't understand how the problem occurs. Is there a problem with the code somewhere?
talk) (I've seen something like this before. You've got Editor / "Enable the editing toolbar" checked under Special:Preferences, yes? I think it's related to that. I used to use that, but found it didn't work reliably and turned it off. Try unchecking it and see if that helps? --
talk) (Well, it helps at removing the citation tools, which is not much better than this bug. So that doesn't work as a solution.
I have seen this before, though not recently. I use Chrome if that matters. For me, if I recall correctly, it appears to happen when the last character added to the edit window (bottom right corner) is a space character that fills or is the first to overflow the edit window. Adding that character causes the browser to enable the elevators in the scrollbars. But, it has been a while since I've seen this, so it is highly possible that I am mis-remembering the details. When I did encounter these problems, inserting a crlf (enter key) restored the display. After I finished writing whatever it was that I was writing, I would go back and delete the crlf, and Bob's your uncle.
The thing I'm seeing most often with this highlighter is mis-registering of the highlighting by a few characters, which, once the highlighting is offset from where it is supposed to be, carries on to the end of the wikisource.
—
talk) (For what it's worth I am going with Firefox. I see from the page history that other editors were able to edit the page in the meantime.
talk) (@Jo-Jo Eumerus: I seem to recall this issue happening only for User:Remember the dot/Syntax highlighter ("Syntax highlighter" at Special:Preferences#mw-prefsection-gadgets). Do you have that enabled? In your screenshot you don't appear to have CodeMirror turned on (via the
button), so I would try that instead of the gadget to see if it works better.

talk) (A little update: It seems like not all versions of the history have the bug at the same time. Maybe it's something with Firefox? I did manage to complete that article with RoySmith's suggestion, but it won't work for a large article or a multi-page source if the bug appears there as well.
Excessive use of style attributes makes custom theme changes (e.g. dark theme) difficult-to-impossible
talk) (Is there a reason why style="..."
is used everywhere, instead of using CSS classes or IDs? (For example, on this page, there are hundreds of occurrences, including color changes.) Doing so makes large theme customizations, especially color changes such as dark-theming, for which color contrast is often an issue, extremely difficult and error-prone. — Preceding unsigned comment added by
New traffic report: Daily article pageviews from social media
talk) (The WMF Research team has published a new report of inbound traffic coming from Facebook, Twitter, YouTube, and Reddit.
The report contains a list of all articles that received at least 500 views from one or more of these sites (i.e. someone clicked a link on Twitter that sent them directly to a Wikipedia article). The report will be updated daily at around 14:00 UTC with traffic counts from the previous calendar day.
We believe this report provides editors with a valuable new information source. Daily inbound social media traffic stats can help editors monitor edits to articles that are going viral on social media sites and/or are being linked to by the social media platform itself in order to fact-check disinformation and other controversial content.
The social media traffic report also contains additional public article metadata that may be useful in the context of monitoring articles that are receiving unexpected attention from social media sites, such as...
- the total number of pageviews (from all sources) that article received in the same period of time
- the number of pageviews the article received from the same platform (e.g. Facebook) the previous day (two days ago)
- The number of editors who have the page on their watchlist
- The number of editors who have watchlisted the page AND recently visited it
We are currently actively seeking feedback on this report! We have some ideas of our own for how to improve the report, but we want to hear yours. If you have feature suggestions, questions, or other comments please add them to the project talkpage on Meta or ping Jonathan Morgan on his talkpage. Also be sure to check out our growing FAQ.
We intend to maintain this daily report for at least the next two months. If we receive feedback that the report is useful, we are considering making it available indefinitely. Cheers,
Template help
talk) (I'm trying to make a template here User:FlightTime/PL display, however I can not get the two items to be on the same line. Any help will be gratefully appreciated. Thanx, -
talk) (You won't if you use Template:Tlx, which emits a table; and tables cannot be inline. --
talk) (Well, there is style="display:inline-table;"
but something in the ombox
class prevents it from working here, and we probably shouldn't try to make it work.
en.wikipedia.org wants to use your device's location???
talk) (I got a "en.wikipedia.org wants to use your device's location" alert the other day when browsing on my phone. Any idea why wikipedia would request my location? --
Hyphenation on mobile view
talk) (Template:Tracked
Has there been a change to the configuration of the mobile site? Starting in the last 24 hours, I'm seeing very over-zealous hyphenation being automatically inserted in mobile views. Taking the current Main Page as an example, 30% of the TFA lines and 50% of the ITN lines of text end with an auto-generated hyphen, but look fine on desktop. Turning my phone to landscape view roughly halves those percentages, and moves the hyphens to different places, but it's still an off-puttingly high number of hyphens that makes the text difficult to read. They're also being inserted in very odd locations within the word e.g. im-agery and mu-sic.
talk) (* {hyphens: manual !important;}
- Or is MediaWiki:Minerva.css only used by "MinervaNeue" at Special:Preferences#mw-prefsection-rendering while the mobile site uses MediaWiki:Mobile.css? Special:Mypage/minerva.css is loaded in mobile in either case so the fix works.
talk) (I am using Safari on iPhone, so that's probably the same issue. I only log in on desktop, remaining logged out on mobile, so cannot make manual CSS changes there. And if there's a problem, it should probably be fixed for more than just the user who reports it...
talk) (Again it's unclear if there was a deliberate change, but it's looking a lot better today. TFA has just two auto-hyphens, and ITN three (portrait mode, two and two on landscape), which is far more manageable. This level is probably what was originally intended.
talk) (@Modest Genius:, the breaking is completely automatic as determined by an algorithm in the browser. —
talk) (Sure, but that doesn't explain why it went haywire for a few days, on Wikipedia only, affecting multiple users. There was no Safari update in that time either.
Articles nearly unreadable
talk) (In the past couple of days, it appears that changes have been made to Wikipedia that have made articles almost unreadable on the mobile site. Words are now allowed to be broken up across lines, connected by a hyphen (I forget the name for this). This is perhaps most noticeable in tables, such as Template:Episode table. One of the column headers in this table is "Directed by", but with the recent changes, it now appears like this:
- Dir-
- ec-
- ted
- by
There are countless examples of this, and changing the orientation of the screen does nothing to fix it.
Miscategorization into Category:Templates with short description
talk) (Template:Resolved
For some reason, Template:Tlp causes the page Template talk:Dosanddonts to be added to Category:Templates with short description. I figured it out by commenting out parts of the page. However, Template:Parameter names example, Module:Parameter names example, and Template:Dosanddonts (passed as parameter into {{Parameter names example}}) don't seem to reference Category:Templates with short description in any way. Does anyone have an idea of why such miscategorization could happen? —
talk) ({{Information page}} has an includeonly block at the top with short description code in it. It may be a good idea to wrap that code in a namespace-limiting selector template. See {{Namespace and pagename-detecting templates}} for options. –
Template Infobox Pandemic - aka Template:Infobox outbreak - is NOT working
talk) (Ok. The Infobox Pandemic template has something - something very wrong with it. The website linkage isn't working. All it says on every single article I have checked is Official Website. BUT the underlying URL isn't presenting. I don't want to try to tinker with it and break the myriad coronavirus-pandemic articles somehow. Help!
talk) (Fixed by [6] at Wikipedia:Help desk#website variable - Infobox pandemic. If you still see problems then purge the page. If that doesn't work then post a link to the article.
Template for identifying autoconfirmed users?
talk) (I'm looking to customize {{Afc talk}} so that, rather than saying "If your account is [autoconfirmed] you can create articles yourself", it gives advice specific to the user's situation. Is there a template similar to {{IsIPAddress}} that identifies autoconfirmed users?
talk) (Unfortunately, it is not possible for any template or module to check whether a user is autoconfirmed or not.
talk) (However, CSS can target groups. An example set is MediaWiki:Group-sysop.css. --
talk) (@Izno: Hmm, is that something I'd be able to use for what I'm trying to do? I'm not that familiar with CSS.
talk) (@Sdkb: Probably, depending on the tradeoff you have to make. Basically you'd put the text for the autoconfirmed user and for the non-autoconfirmed one, then wrap them in the appropriate class, without any if statements or similar. The source wikitext has more "junk" in it, but a new user should only see the content they care about on the page. --
talk) (@Izno: A little extra code sounds like a worthwhile tradeoff. (I'm assuming it would display based on the status of the viewer, not status of the user whose talk page it's on, so it'd display differently for AfC reviewers than reviewees. That's a bigger downside to me, but I'd still like to try implementing.) I found MediaWiki:Group-autoconfirmed.css, which is hopefully right, but I'm not sure where the group is for all non-autoconfirmed editors; is there a class for that or a way to code it?
talk) (- There is no group for non-autoconfirmed editors but you can use the definition of unconfirmed-show in MediaWiki:Group-autoconfirmed.css:
<div class="autoconfirmed-show">Only autoconfirmed viewers see this.</div><div class="unconfirmed-show">Only non-autoconfirmed viewers see this.</div>
- This code produces:
- Only autoconfirmed viewers see this.Only non-autoconfirmed viewers see this.
unconfirmed-show starts out visible to everybody but is then hidden for autoconfirmed users by MediaWiki:Group-autoconfirmed.css. autoconfirmed-show starts out visible for everybody, is then hidden for everybody by MediaWiki:Common.css, but is then made visible again for autoconfirmed users by an !important override in MediaWiki:Group-autoconfirmed.css. If the CSS files are not loaded for a user then they see both, e.g. in safemode or in some republishers.
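The mechanism described above can be sketched as CSS (simplified; the real rules live in MediaWiki:Common.css and MediaWiki:Group-autoconfirmed.css and may differ in detail):

```css
/* In MediaWiki:Common.css — loaded for everyone:
   hide the autoconfirmed-only content by default. */
.autoconfirmed-show {
    display: none;
}

/* In MediaWiki:Group-autoconfirmed.css — loaded only for
   autoconfirmed users: reveal their content with !important,
   and hide the fallback meant for unconfirmed viewers. */
.autoconfirmed-show {
    display: inherit !important;
}
.unconfirmed-show {
    display: none;
}
```

This is why a user with no CSS loaded (safemode, republishers) sees both spans: without the stylesheets, neither `display: none` rule ever applies.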
Links to WP:SPI archives don't appear
talk) (WP:Sockpuppet investigations/Osatmusic was archived yesterday, but there's no link to the archive page. I noticed something like this a while ago, and was told it was just a cache problem. In that case, when I looked again, sure enough the link was there, so I assumed that was the case. But, this is 14 hours ago, which seems like more than long enough for any page cache to time out. Even odder, what links here doesn't show the archive page either. I only get:
User talk:Osatmusic (links | edit)
User:Osatmusic (links | edit)
User:Krealkayln (links | edit)
User talk:Krealkayln (links | edit)
Category:Wikipedia sockpuppets of Osatmusic (links | edit)
and if I directly query the pagelinks table, sure enough, that's all that's in the table. Any idea what's going on here? --
talk) (Template:Worksforme @RoySmith: check now, you may need to clear your cache. —
talk) (@Xaosflux:, Yes, it's working for me now too. So, this is essentially the same experience I had the last time. Somebody else tries it, it works for them, and then it works for me too. That smells like some kind of cache invalidation problem inside the server stack, not my browser.
The other thing that's confusing is why the archive doesn't show up in the What Links Here listing. Looking at some other SPI pages, they're all like that. I guess the "< Wikipedia:Sockpuppet investigations | Karmaisking" line at the top of the archive is generated in some way that doesn't create an entry in the pagelinks table? --
Template include size limit
talk) (2019–20 coronavirus pandemic and 2019–20 coronavirus pandemic in Mainland China are still each exceeding the template include size limit, which is generally not a good thing because it produces reader-facing errors and we aren't supposed to like those. I've tried to reduce the template size of the former article (mainly from removing inline CSS), but I've only gotten about one quarter of the way there.
The main ways I could see the situation improving:
- Replacement of {{Flagdeco}}, which would result in a reduction of about 300KB. The template is impressively inefficient and has multiple nested layers. I think it could be appropriate to just substitute all the instances, but it would probably be worthwhile to improve the template as well.
- Replacement of {{Medical cases chart/Row}}, which suffers from the same sort of issue; its improvement would result in a similar reduction. It's based on {{Bar stacked}}, but I think it would be worth removing the dependency if it would improve the efficiency of the template.
- Reduction of the navbox sizes, specifically {{2019–20 coronavirus pandemic}}. The navboxes at the bottom of the former article take up 500KB. This navbox in particular seems to be excessively large and would probably benefit from being split into several smaller navboxes. Removing the links to the data templates alone would result in a reduction of 130KB.
None of these seem like low-hanging fruit, since both templates are fairly convoluted and the navbox split might take a while and/or be difficult to organize, so maybe there are still other things that could be reduced first.
talk) (I checked out substing the Flagdeco template; it needs to be substed 3 times, and some useless "if" statements cut, to end up with a simpler file and size spec. It looks achievable if we do it in a sandbox and then replace the flags. There has been consensus to keep the flag images. I am prepared to do this over a period of time if there is agreement here.
talk) (There are 224 uses of flagdeco in the three sidepanel templates. I could readily replace them with much more efficient code that I would put in a module; however, previewing the 224 templates in a sandbox requires only "CPU time usage: 0.824 seconds" and not much else, so I don't think working on them would help much. I'll look at the other options. I have fixed several large articles by replacing key parts with cut-down equivalents in a module, and something like that will be needed here, as the article requires "CPU time usage: 9.328 seconds", which makes edits very hard quite apart from the technical limit.
talk) (The current problem is that 2019–20 coronavirus pandemic has hit this limit:
- Post‐expand include size: 2097139/2097152 bytes
1.2 MB of that comes from the templates in the following table.
Template | Bytes | Percent |
---|---|---|
{{COVID-19 testing}} | 215,716 | 17.8 |
{{2020 coronavirus quarantines outside Hubei}} | 120,753 | 10.0 |
{{2019–20 coronavirus pandemic data}} | 423,354 | 35.0 |
{{2019–20 coronavirus pandemic}} | 285,450 | 23.6 |
{{PHEIC}} | 8,588 | 0.7 |
{{Health in China}} | 17,649 | 1.5 |
{{Respiratory pathology}} | 64,663 | 5.3 |
{{Viral systemic diseases}} | 39,892 | 3.3 |
{{Pneumonia}} | 7,485 | 0.6 |
{{Epidemics}} | 25,151 | 2.1 |
Total | 1,208,701 | 100.0 |
To fix the post‐expand include size problem, the only options are to remove some of the above from the article, or to include the wikitext of some of the above templates in the article. Those options are ugly and the second option would not save much. Commenting out {{2019–20 coronavirus pandemic data}} and previewing the page gives 2,002,635 bytes with all the other templates expanded.
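The table's arithmetic can be reproduced in a few lines of JavaScript (byte counts copied from the table above; 2,097,152 bytes is the post-expand include size cap quoted in the limit report):

```javascript
// Byte counts copied from the table above.
const templates = {
  'COVID-19 testing': 215716,
  '2020 coronavirus quarantines outside Hubei': 120753,
  '2019–20 coronavirus pandemic data': 423354,
  '2019–20 coronavirus pandemic': 285450,
  'PHEIC': 8588,
  'Health in China': 17649,
  'Respiratory pathology': 64663,
  'Viral systemic diseases': 39892,
  'Pneumonia': 7485,
  'Epidemics': 25151,
};

// The post-expand include size cap from the limit report, in bytes.
const LIMIT = 2097152;

const total = Object.values(templates).reduce((a, b) => a + b, 0);
console.log(total); // 1208701

// Share of the tabulated total, matching the Percent column.
const percent = (bytes) => (100 * bytes / total).toFixed(1);
console.log(percent(templates['2019–20 coronavirus pandemic data'])); // "35.0"

// These templates alone consume well over half the cap.
console.log((100 * total / LIMIT).toFixed(1)); // "57.6"
```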
talk) (You could even cut the navboxes' template size by close to a quarter just by invoking the modules directly from the navbox templates instead of using {{Navbox}}.
talk) (@Jc86035: I replaced all 178 occurrences of {{flagdeco|...}}
in {{2019–20 coronavirus pandemic data}} with the fully expanded output of one of the flags, then previewed the result at 2019–20 coronavirus pandemic. There was no significant change to the post‐expand include size in the result although it did manage to expand the first two navboxes. By "only options", I was thinking that things like halving the template expansion of the navboxes by directly calling the navbox module would not give enough benefit.
talk) (@Johnuniq: If you previewed the result in the article, because it would be exceeding the limit both before and after the change, it would just display some number close to the maximum each time (if I'm understanding the situation correctly). Testing by previewing the template without the documentation and then doubling the number is probably more accurate.
talk) (Yes, the expansion size was near the maximum but, as mentioned, it managed to expand the first two of the navboxes ({{2019–20 coronavirus pandemic}} and {{PHEIC}}). Replacing every Template:Tlf with its expansion would save the space from those two navboxes, namely 285,450 + 8,588 = 294,038 bytes. However, a bit more saving is needed to include all the navboxes, and the article is going to keep expanding.
talk) (The following pages are in Category:Pages where template include size is exceeded:
- 2019–20 coronavirus pandemic
- 2019–20 coronavirus pandemic in mainland China
- 2020 coronavirus pandemic in the Philippines
- Timeline of the 2019–20 coronavirus pandemic in February 2020
- Timeline of the 2019–20 coronavirus pandemic in March 2020
- Timeline of the 2019–20 coronavirus pandemic in November 2019 – January 2020
- Template:2019–20 coronavirus pandemic data/International medical cases
- Template:Medical cases chart
The last page is pretty mysterious.
talk) (@Johnuniq: It's probably because of the size of the documentation, which includes four real examples.
talk) (Interestingly enough, a sandbox version shows the bottom templates normally. So the issue seems to occur only in article namespace.
talk) (@Brandmeister: I replaced the {{flagdeco}} uses in two of the templates a few hours ago, which appears to have resulted in the issue being resolved (for now) on 2019–20 coronavirus pandemic. There may have been other contributing changes, such as the change to {{flagdeco}} from {{flagicon}} in {{COVID-19 testing}}, the change to use the navbox modules directly in {{2019–20 coronavirus pandemic}}, and changes to article content. Nevertheless, the page is still just 6% under the template limit, and 2019–20 coronavirus pandemic in mainland China still exceeds the limit, so I think it would be worth it to make further optimizations.
Category view in watchlist
talk) (I have the option to show additions/removals to categories turned on (or rather, its inverse not turned off). That now gives a line like this in my watchlist:
- [revid] [user] ([talk] [contribs] [block]) [article] added to category
I sorely miss a diff link there for the edit that caused the category change. At present I either have to click 'contribs' and find that editor's edit, or click 'article', load its history, and find the edit there. If one editor made multiple edits to the same article, the actual edit that caused the category change is quite a job to find. For maintenance categories, you sometimes want to be able to revert the edit that caused the categorisation change without having to hunt it down manually. Are there any options to add a 'diff' link to these lines in the watchlist? --
talk) (Click the revid to see the revision. Then click "diff" to the left of "Previous revision" at the top to see the diff for the revision (there is no diff link if it's the page creation). This works for all revision links. There is also a diff link to the next and current revision. You appear to have enabled "Group changes by page in recent changes and watchlist" at Special:Preferences#mw-prefsection-rc. It's worse if it's disabled. Then you get unlinked "(diff | hist)" (this is phab:T148533) and have to do the contributions or history search you mention.
One suggestion for the UI
talk) (Hello, I have a suggestion for Wikipedia's UI. When a long article needs a task done, it is frustrating to read it up to the end and then scroll back up to click the edit button or publish button. I wanted to suggest: can't we have a scroll-up button at the bottom right corner of every Wikipedia page? I think this would help a lot.
talk) (@Lightbluerain:, I would suggest you take a look at User:BrandonXLF/ToTopButton. The page has instructions to help you install the user script.
talk) (Your browser or operating system probably also has a keyboard shortcut for "go back to the top of this window". On my computer, for example, it is "command-up arrow". –
talk) (User:BrandonXLF Thanks a lot. It worked.
- Jonesey95, Thanks, but I use wikipedia on mobile.
- Elizium23, read User:BrandonXLF's comment. It worked for me.
How do I find this?
talk) (I'd like to propose a change to the automated notification new editors receive after creating their account and the one they receive after making their first edit. Is there a page buried deep in MediaWiki or something where I could propose such a change?
talk) (@Sdkb: Depends. Do you talk about email notifications or about in-browser notifications? Do you want to change the default in the MediaWiki software itself, or only for English Wikipedia? To change the in-browser first-edit one, see the "notification-header-thank-you" strings in the English translation of the Echo/Notifications extension at phab:diffusion/ECHO/browse/master/i18n/en.json (and see mw:How to report a bug if you want to propose changes). To change the account creation email on English Wikipedia only, see MediaWiki:Confirmemail_body. (PS: For future reference and as a courtesy to readers, please avoid "this" in summaries but name things, as nobody can know what "this" is about until having read the complete comment - thanks a lot!) --
talk) (We haven't customized any of the messages after the nth edit.[7]
talk) (@PrimeHunter: the proposal is to have the on-wiki welcome notification link to Help:Introduction to Wikipedia instead of WP:Getting started (the intro page that's just a list of every other intro page). I can't find any code linking to WP:GS on the MediaWiki page, but @Clovermoss: looked through her past notifications and found that's where it links currently.
talk) (@Sdkb: I once added the link Allmessages from the API to WP:QQX. It can be used to search all MediaWiki messages when you know a string. This is MediaWiki:Notification-welcome-link. The available pages for a link depends on the wiki so a change should be here and not in MediaWiki itself. The default is empty.
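PrimeHunter's Allmessages search can also be scripted. A sketch of building such a query against the standard MediaWiki action API (action=query with meta=allmessages and its amfilter parameter are real; note that amfilter matches message names, while searching message contents, as the Special:AllMessages web UI allows, may need client-side filtering of the results):

```javascript
// Build a MediaWiki API URL listing interface messages whose
// names contain the given string.
function allMessagesUrl(filter, origin = 'https://en.wikipedia.org') {
  const params = new URLSearchParams({
    action: 'query',
    meta: 'allmessages',
    amfilter: filter,
    format: 'json',
  });
  return `${origin}/w/api.php?${params}`;
}

console.log(allMessagesUrl('welcome'));
// https://en.wikipedia.org/w/api.php?action=query&meta=allmessages&amfilter=welcome&format=json
```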
talk) (Thanks so much; you all are wizards! I'll make the proposal sometime soon at VPP for visibility, but glad to have somewhere concrete to point to.
talk) (Will not support a change to an accessibility nightmare set of pages. Not sure you get it yet! --
- Thanks, I'll take a look!
@Moxy: I had a feeling you'd oppose this haha. You've made your views very clear, and please trust that I am reading all your comments/the resources you're linking; I sincerely appreciate the attention you're giving to the proposals, and I think it's made them stronger. On some matters like the buttons we are just going to have to disagree; I've already responded and don't have anything else to add beyond them. Regarding the new issue of accessibility you've started bringing up, my thought is that, by adding some fragment tags to Template:Intro to and then using excerpts, it should be possible to create a readable single-page display of the intro to series that could be linked from the menu. Overall, I'm not sure the technical pump is the best place for us to get into this discussion too deeply; we'll both have a chance to have our say if I get around to making it a proposal.
Dated statements
talk) (In 2019–20 coronavirus pandemic two redlinked maintenance categories Category:Articles containing potentially dated statements from 25 March 2020 and Category:Articles containing potentially dated statements from 26 March 2020 are unhidden for some reason. Should they be created and then hidden manually?
talk) (This was caused by misuse of the {{As of}} template in Template:2019–20 coronavirus pandemic data. I have fixed those instances. It seems like that template could use some error checking. –
talk) (Category:Articles containing potentially dated statements from 26 March 2020 has now reappeared in the article, perhaps someone is messing up the template.
punctuation and piping
talk) (A fellow editor has remarked that the markup
often ejects the comma to a new line on a mobile display, and thus has attempted to circumvent this sort of behaviour by using the piped markup
As I see it, this manner of piping ought not to be necessary and is probably in breach of WP:PIPE. Incidentally, the colleague contacted me because I have a script that removes redundant piping and which has disturbed their workaround. I have not personally observed the line ejects as described, but if such ejects are "normal behaviour" for the MW software, this may be an issue for the wider community. Any comments would be welcome --
talk) (Broadly agree with TheDJ, but I would see this as generally inappropriate. --
talk) (Many thanks for the comments, especially those of PrimeHunter re. another possible workaround. To TheDJ, I note that when using Chrome on Android it's very common to see hanging punctuation marks as I described, therefore I would certainly say this is an issue which needs addressing. To Ohconfucius, I don't see anything in WP:PIPE to suggest that the two workarounds now suggested are proscribed. What the guidelines do mention are cases where an entire word or phrase is piped to create an internal link and that link pertains only to part of the text in question. The rationale for prohibition there is clearer, since in theory a reader might be given the impression that they're being redirected to an article on one subject (possibly as yet unwritten) when in fact it concerns quite another. However, I've never encountered this situation in practice and would argue that editors should be able to exercise their own judgement in such instances (with the caveat, perhaps, that WP:PIPE advises the exercise of caution). Punctuation inside piped internal links is a different matter entirely, since it doesn't introduce this kind of ambiguity.
Since, when I see poor typesetting on any web page, I immediately begin to question the reliability of its content, my personal opinion is that it's actually rather important to avoid hanging punctuation, aside from the fact it just looks nasty! I can't immediately think of a way of avoiding this unless the same style is applied to the word immediately preceding a punctuation mark (as far as I'm aware, the "hanging-punctuation" property has, to date, only been implemented by Safari). A workaround would therefore seem the only course of action, though it may not be ideal – in life, the "least worst" solution to most problems involves some degree of compromise.
Any further thoughts on this subject would be most welcome.
talk) (@Edwin of Northumbria: please file a bug with Chrome mobile. Doing weird workarounds in our content isn't a sustainable long-term solution, honestly, especially when it involves problems that are device-specific. "Punctuation inside piped internal links is a different matter entirely, since it doesn't introduce this kind of ambiguity" No, but it does other things, like telling Google search, Apple dictionary, or screen-scraping bots that the display value of a link contains a comma. Also consider that your personal 'least worst' might still be someone else's 'most worst'. If we don't complain to browser vendors then the problem will never be fixed, and there will be a never-ending workload of adding commas to links and then desktop users removing them again. —
[[Primal Scream]],
produces this html in both desktop and mobile: <a href="/wiki/Primal_Scream" title="Primal Scream">Primal Scream</a>,
Wrapping before the comma looks like a poor browser decision. Ohc ¡digame! and TheDJ – Further research indicates that the problem occurs only when a link is involved and is not limited to Chrome! The same issue arises with Firefox, Opera and MS Edge (I didn't check other browsers). I'm not sure if they all use the same display engine, but it would be logical if they did, and my tests appear to confirm this. None of the aforementioned software exhibits the same behaviour in Windows 10, so it seems more appropriate to file an Android bug report. If TheDJ concurs with my analysis then I'll do so ASAP.
Thanks to TheDJ for your observations re. bots, but are they really that stupid? (I'm not disagreeing with you, I'm just surprised.) Could you point me to some documentation on this, especially with respect to the use of "display names"?
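The workaround discussed above — keeping a link and its trailing punctuation on one line — could in principle be applied mechanically rather than in wikitext. A sketch, operating on an HTML string for simplicity (the "nowrap" class name is an assumption borrowed from the usual no-wrap styling convention; note this also prevents breaks inside the link text itself, which is one of the downsides TheDJ warns about):

```javascript
// Wrap each link plus any punctuation mark that immediately follows
// it in a no-wrap span, so the browser cannot break between them.
// A real gadget would walk the DOM instead of regexing HTML.
function bindTrailingPunctuation(html) {
  return html.replace(
    /(<a\b[^>]*>.*?<\/a>)([,.;:!?])/g,
    '<span class="nowrap">$1$2</span>'
  );
}

const input = '<a href="/wiki/Primal_Scream">Primal Scream</a>, a band';
console.log(bindTrailingPunctuation(input));
// <span class="nowrap"><a href="/wiki/Primal_Scream">Primal Scream</a>,</span> a band
```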
Infobox Images with transparent areas needing a different background color
talk) (For article Made with Code, image File:MwC_Logo-pink.png should have a black background
- is this possible ?
- http://web.archive.org/web/20180308230527/http://www.madewithcode.com/
- Template:Infobox organization
- Module:Infobox
- Module:InfoboxImage says:
- {{#invoke:InfoboxImage | InfoboxImage | image={{{image}}} | size={{{size}}} | maxsize={{{maxsize}}} | sizedefault={{{sizedefault}}} | upright={{{upright}}} | alt={{{alt}}} | title={{{title}}} | thumbtime={{{thumbtime}}} | link={{{link}}} | border=yes | center=yes | page={{{page}}} }}
Duplicate IP address to mine? UPDATE: Answer, No. New question: Why is Wiki alerting me when edits from IP addresses similar to mine are posted, before I'm logged in?
talk) (There seems to be a duplicate IP address to mine; or mine's being used somehow also by another IP-number editor (vandal-type edits). I don't know how to confirm my own IP address from my machine but I get occasional messages when I come onto Wiki without being logged in. (My IP number as triggered/used by this imposter (?) is 12-digits -- four groups, three digits each; which per IP address doesn't necessarily look legit.) (Somehow there's only one recorded edit to this 12-digit IP number currently, and the edit is recent; but I've certainly encountered (a) similar situation(s) one or more time(s) before, too.) Does this outline a familiar problem at all, with a solution? Thanks!
talk) (@Swliv:, Four groups, 3 digits each sounds exactly right for an IP address. Each group can be 1 to 3 digits, from 0 to 255. There are a lot of reasons that this could happen. Mobile internet for one, or perhaps a shared internet such as at a university, business, library, or other similar places. Could also be someone else at the same house.
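The "four groups, 1 to 3 digits each, from 0 to 255" description above is the dotted-quad IPv4 format; a small sketch of that check (a hypothetical helper for illustration, not anything Wikipedia itself runs):

```javascript
// Check whether a string is a dotted-quad IPv4 address: exactly four
// groups of 1-3 digits, each in the range 0-255, as described above.
function isIPv4(s) {
  const groups = s.split('.');
  if (groups.length !== 4) return false;
  return groups.every(g => /^\d{1,3}$/.test(g) && Number(g) <= 255);
}

console.log(isIPv4('192.168.1.1')); // true
console.log(isIPv4('256.1.1.1'));   // false (group out of range)
console.log(isIPv4('12.34.56'));    // false (only three groups)
```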
Getting text to display on mobile only
talk) (Apologies I've been asking a bunch of technical questions here recently, but one more: what do I wrap a span of text with so that the text displays only to mobile users? (I'm looking to link editors who visit tutorials on mobile to Wikipedia:Editing on mobile devices.)
talk) (@Sdkb: You can use Template:Tlx -
Why does this link show up?
talk) (Hello. If you open Template:Infobox power station in edit mode (not visual editor), you can see "Geothermal well" being listed under "Pages transcluded onto the current version of this page" (under the edit box). Any idea what part of the code is causing that? I can't seem to figure out.
talk) (All I can work out at the moment is that the transclusion is on the doc page: Template:Infobox power station/doc.
talk) (The doc page calls the template with qid = Q693330
and this triggers it in data22
. Here is a minimal example where it stops if any part is removed: {{#invoke:WikidataIB |getValue |P527 |qid=Q693330 |fetchwikidata=ALL|onlysourced=no}}
. The code produces: Script error: No such module "WikidataIB".
This page now transcludes Geothermal well. wikidata:Q693330#P527 says "geothermal well". Module:WikidataIB by RexxS transcludes this name. I don't know why.
talk) (When the function in WikidataIB retrieves the value of the property Template:Q from Template:Q, it finds "Q61846297". That has to be transformed into a readable English term, linked to an article, where possible. So the function looks at Template:Q for a sitelink. If one is found, it uses it: no problem. But what if there is no sitelink (as in the case of Q61846297? Then it looks for a label (and finds "geothermal well" in this case). So can we link to an article using that label? There are three remaining possibilities: (1) the article of that name is a dab page, so we don't link; (2) the article of that name is a redirect, so we do link; (3) there is no article of that name, so we don't link. The only way to determine which of the three possibilities is correct is to examine the title object for that name, and unfortunately, that marks the page that the title object refers to as transcluded onto the page that calls it, whether the page that the title object refers to exists or not. So we either accept the erroneous transclusion, or we lose the functionality to link redirects. That would mean that we would lose two of the three links from calls like the one which returns Template:Q from Template:Q:
{{wdib|fwd=ALL|osd=n|P106|qid=Q133682}}
→ Template:Wdib
Only Anthropologist exists as an article on enwiki, so has a sitelink from Template:Q. The other two are redirects with no sitelinks from Wikidata, and that situation is quite common here. --
talk) (@RexxS: Thanks for the detailed explanation. Red links under the list of transcluded pages is often a sign that something is wrong and should be examined. The module could start with an #ifexist check on the target page and stop if the page doesn't exist. I don't know Lua but in templates, #ifexist does not affect the source page and only causes a WhatLinksHere entry (as a link and not transclusion) for the target page. That is less likely to cause concern.
talk) (@PrimeHunter: The code (lines 630-639) already calls artitle = mw.title.new( label_text, 0 )
which returns a title object or nil if it doesn't exist in mainspace, so we know whether the article exists without any of the false links caused by #ifexist
. However, you still have to check if artitle.redirectTarget
exists, which is the cheapest way of distinguishing a dab link from a redirect at that point. It's examining that property that makes the false transclusion. I think you'll find that it's far better to have a false transclusion than a false file link, as bots look for the latter and report them. I've already been down that route, and believe me, the current algorithm is the result of lots of debate and amendment. Cheers --
talk) (@RexxS: I meant to only stop if no page exists. If it exists then it's OK to examine what it is. You say mw.title.new( label_text, 0 ) returns nil if it doesn't exist in mainspace. If that's how mw.title.new works then I don't see why you have to use artitle.redirectTarget when the page does not exist. But mw:Extension:Scribunto/Lua reference manual#mw.title.new says it only returns nil "If the text is not a valid title". I think this means it e.g. contains disallowed characters and not that there is no page by that name. As said, I don't know Lua but maybe it does have to either "link" or "transclude" the page to examine whether it exists. If we prefer to "transclude" then the documentation could mention it. I searched Module:WikidataIB for "transclu" while examining the original post.
talk) (@PrimeHunter: I just checked and you're right about the meaning of "valid title". The code it affects is line 633 if artitle and artitle.redirectTarget ...
. I can change that to if artitle.exists and artitle.redirectTarget ...
which will not evaluate the second part if the first is false, i.e. we don't get the false transclusion, but we do get the false file link by testing artitle.exists
, so I don't think we're any better off, sorry. If you'd like me to make the change, it's just a moment's work, but I think we'll simply get the same complaints about file links instead of transclusions every time we do this:
{{wdib|ps=1|P527|qid=Q693330}}
→ Template:Wdib
What do you recommend? --
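The trade-off RexxS describes hinges on short-circuit evaluation: in Lua's `a and b` (or JavaScript's `a && b`) the second operand is only evaluated, and its side effect only happens, when the first is true. A JavaScript sketch of the idea (redirectTarget here is a stand-in with a recorded side effect, not the real title-object property):

```javascript
// Stand-in for probing artitle.redirectTarget: record each probe so we can
// see when the side effect (the false transclusion/file link) occurs.
let probed = [];
function redirectTarget(title) {
  probed.push(title); // the side effect we want to avoid for missing pages
  return null;        // pretend the page is not a redirect
}

function wouldLink(exists, title) {
  // Mirrors `if artitle.exists and artitle.redirectTarget`: the probe is
  // skipped entirely when the page does not exist.
  return exists && redirectTarget(title) !== null;
}

wouldLink(false, 'Geothermal well');
console.log(probed); // [] - nothing probed, no side effect

wouldLink(true, 'Geothermal well');
console.log(probed); // [ 'Geothermal well' ] - probed only for existing pages
```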
talk) (@RexxS: I only know Wikipedia:Most-wanted articles which is updated around once yearly, usually by Bamyers99. Are there other reports of red links to mainspace?
id and isRedirect object properties, and I had complaints about entries appearing in the "What links here" section of articles as file links. Template:Pb I've now found the discussion at Module talk:WikidataIB/Archive 3 #New parameter for getValue sought to avoid attempt to resolve redirects. This was @ferret:'s last observation: Template:Tq. My reply now seems quite prescient: Template:Tq Complaints to Anomie, then. --
Instant Search Suggestion disappeared
talk) (Some time ago, my search box on the Wikipedia homepage stopped working (www.wikipedia.org), i.e. no drop-down suggested searches appear when I type in the search box. Perhaps this was the result of a change in the default account settings; perhaps it is an issue that can be resolved by wiping my history and cache? I do not know. If someone who is more technologically proficient than I can assist, it would be much appreciated.
talk) (@Ergo Sum: Please purge your cache and try again. If the issue persists, can you provide what web browser you use, and any errors that appear in the JavaScript console?
Rdcheck tool not working
talk) (Seems that the Rdcheck tool is not working at the present time. Whenever a page is checked for incoming redirects, the search fails due to python coding errors.
talk) (Does anyone know what is going on with this and/or if/when/how this is getting fixed?
talk) (...?
talk) (This use of Rdcheck to find incoming links to the Combinatorics article seems to work.
Something is wrong with {{sfn}} and {{sfnp}}
talk) (Looks like Template:sfn has become badly broken. It is making red links and duplicate reference errors in tens of thousands of articles. Templates which use it in turn (like {{Listed building England}}) are also having problems. Is a fix coming, or is this some disruptive change that editors must manually repair in each affected article? --
talk) (
talk) (Sorry, should have pinged @Mikeblas:.
talk) (That discussion seems to be about problems where the sfn template isn't given a correct anchor for the reference. As far as I can tell, the Listed building England template correctly specifies a ref parameter. In the Harty article, for example, each of the cited buildings has the expected "CITEREFHistoric_England463505" (for example) anchor. The page still has six different "sfnp error" messages, and a "duplicate ref def" message to boot. Maybe I'm missing some connection, but this issue seems to go far beyond the change discussed at the Administrators' Noticeboard. --
Unfilled template parameters
talk) (In Template:Infobox heteropolypeptide some parameters look mandatory and, when unfilled, are shown as unhelpful raw placeholders like {{{protein_type}}}, {{{subunit1}}} etc. (e.g. when the data itself is deficient), as seen in Hemolithin. Is it possible to make them optional?
Want to start scripting
talk) ((ColinFine from the Teahouse suggested I post this here) Hi. I would like to start writing user scripts for Wikipedia. I've read through the guide but I'm more of a visual learner so I was wondering if you could point me in the direction of any videos etc or any experienced user willing to walk me though it. Thanks,
talk) (@RealFakeKim: you should start by looking over Wikipedia:WikiProject JavaScript and Wikipedia:User scripts. Our programming primarily uses JavaScript and Cascading Style Sheets (CSS). If you are not familiar with these concepts and languages, W3Schools (external site) is a good place to start learning for free. —
Let's update all our COVID-19 data by bot instead of manually
talk) (Johns Hopkins University has created a public data set of all COVID-19 infections/deaths/recovered/active for all counties, states, and countries, updated and archived daily and sourced to reliable sources. While our editors have been doing an admirable job of updating all our numbers manually, the effort has not been 100% reliable or consistent, plus there's a pretty good chance you'll get an edit conflict when you try to edit the data because so many people are messing with it. I would like to propose that someone write a bot to automatically pull the data from the Johns Hopkins data source and use it to update tables such as Template:2019–20 coronavirus pandemic data and Template:2019–20 coronavirus pandemic data/United States medical cases by state. It could even be used to update the county lists in state outbreak articles, but that's probably lower priority. Any thoughts about this? Any volunteers?
talk) (I can write python/pywikibot code that will do the updates, but it's important that it fetches the information from a source that the community trusts, and that doing bot updates is what the community wants. I can set this up tomorrow if that would help. Thanks.
talk) (The license statement at the bottom of that Github readme is not compatible with CC by SA (the license is non-commercial). --
talk) (Yes, unfortunately it says "copyright 2020 Johns Hopkins University, all rights reserved". I think in cases like this we allow for periodic usage within citations per fair use, but not systematically copying large parts of the database into Wikipedia, i.e. we are not competing with JHU by hosting their data here. It says at the top they acquired some of the data from public sources, so if we can determine those sources and use that data, JHU cannot copyright public-domain data such as data from the government --
talk) (@GreenC: Raw data is not eligible for copyright in the United States and is therefore in the public domain. Regardless of their assertion, they do not have copyright and US law is clear that "facts that were discovered" are not sufficiently creative to qualify for copyright protection (see Feist v. Rural) no matter how much work went into compiling them.
talk) (But do note that GreenC is correct in saying that "systematically copying large parts of the database into Wikipedia" is not allowed. Copyright extends to the specific presentation format of the data, and so simply copying and pasting it is a violation of copyright law. So long as the data are rearranged or modified, there is no copyright infringement from simply using the underlying public domain data. Doing this to create a competing product is allowed and not infringing.
talk) (@Mdennis (WMF): would WMF Legal be able to provide guidance on this?
talk) (@Wugapodes: Maggie's not in the office at the moment but I can chase this up.
talk) (Thanks @JSutherland (WMF):, anyone from WMF legal would be helpful, I just chose Maggie as the first person from the list since Template:Noping's userpage says it's for taking actions not getting in touch with the team.
talk) (Template:Noping has been approved for trial to do something similar to this already. See Wikipedia:Bots/Requests for approval/WugBot 4. If there is consensus here, I can update the request and modify the bot to update more pages. Or, alternatively, we could probably write a lua module which parses the already available CSV data into a wikitable (I would even bet someone has already written a module like that). I don't have a strong opinion on the proposal, but am willing to help if there's consensus for it. Template:Ec
talk) (It seems like @Tedder: has some sort of tool that they're using to create and maintain the case charts. I have no more information about it, since there's no BRFA or documentation. --
talk) (What are they doing over at Wikidata with Covid data? --
Months
talk) (Hi! When I translated Template:Interactive COVID-19 maps/Cumulative confirmed cases to Ukrainian I had a problem. The system "translated" 12/03/20 as Mar 12, but March in Ukrainian won't be "March". So I want to ask on which page in the English wiki I can find this process, so I can fix it in the Ukrainian wiki. Thanks!--
talk) (@Dimon2712: Where do you see 12/03/20 or Mar 12? It transcludes {{Interactive COVID-19 maps/common}} which says "03/28/20" and "Mar 28" directly in the source.
talk) (@PrimeHunter: thanks, but the source only contains Mar 28; I meant dates in general. Please look at this template on ukwiki: if you drag the circle you will see that "Бер" was changed to "Mar"--
talk) (@Dimon2712: Unfortunately, I think this is blocked by phab:T100444. --
talk) (You can get month numbers instead of names by changing timeFormat('%b %d',scaledHandlePosition)
in the local {{Interactive COVID-19 maps/common}}. Based on Date and time notation in Ukraine you may want timeFormat('%d.%m',scaledHandlePosition)
. This gives 12.03 instead of Mar 12. '%d.%m.%y'
gives 12.03.20.
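The % specifiers above are d3-time-format/strftime-style directives; a standalone sketch reproducing the formats mentioned (this helper only imitates the specifiers discussed, it is not the library function the template actually uses):

```javascript
// strftime-style date formatting as used by d3-time-format's timeFormat().
// This standalone helper mimics the specifiers discussed above:
// %d (zero-padded day), %m (zero-padded month), %y (two-digit year),
// %b (abbreviated English month name).
function timeFormat(spec, date) {
  const pad = (n) => String(n).padStart(2, '0');
  const months = ['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec'];
  return spec
    .replace('%d', pad(date.getDate()))
    .replace('%m', pad(date.getMonth() + 1))
    .replace('%y', pad(date.getFullYear() % 100))
    .replace('%b', months[date.getMonth()]);
}

const d = new Date(2020, 2, 12); // 12 March 2020
console.log(timeFormat('%b %d', d));    // "Mar 12"
console.log(timeFormat('%d.%m', d));    // "12.03"
console.log(timeFormat('%d.%m.%y', d)); // "12.03.20"
```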
talk) (@PrimeHunter: thank you! I've done it.--
Can you help with improving the Coronavirus epidemic maps?
talk) (Hi all
The maps on the Coronavirus pandemic articles need some technical help which has been documented on phabricator, I think that some of the tasks relate to templates, so please take a look.
If you're not able to assist with any of the tasks please still subscribe to it, it will help software engineers understand there is community support for fixing these issues.
Thanks very much
talk) (Some of those tasks in bug T248707 can be done with an mw:Extension:Graph map, see mw:Extension:Graph/Demo.--
talk) (Hi @Snaevar: could you please write on the phabricator task how they could be accomplished? It seems like the first issue is knowing which bit of software the bug is coming from. Thanks,
Source code edit
talk) (Hi,
I am curious why the 2010 wikitext editor (with the editing toolbar on) is affecting the editing area, as the source code has become styled unnecessarily. The only way for me to fix that is to disable the editor. Is anyone else experiencing the same issue?--
talk) (It sounds like Wikipedia:Syntax highlighting. You probably clicked a highlighter marker button
to the left of "Advanced" in the toolbar.

An inter-language link missing?
talk) (Hello to all the confined wikiistes on the Earth. The missing link is about this article in French fr:Réacteur_nucléaire_naturel_d'Oklo in which the inter-language link towards English is ineffective.
I insist on the fact that the article in English, Natural nuclear fission reactor, approximately on the same subject, has got a link towards the equivalent in French that is effective.
I already sent this problem to your French equivalent here:
fr:Wikipédia:Le_Bistro/26_mars_2020#Lien_inter_langues_manquant_(suite)
, but I'm not able to understand the bla-bla (rabbiting) they write.
Thank you for your explanations (if I understand them, then I'll try to repair the thing) or for fixing the bug.
talk) (Template:Edit conflict There seem to be two wikidata items (one capitalised): Natural nuclear fission reactor (d:Q12029714) and natural nuclear fission reactor (d:Q64470499). —
{{interwiki extra|qid=Q12029714}}
in Natural nuclear fission reactor#External links. The French Wikipedia has a similar template so without merging the Wikidata items you could add {{interwiki extra|qid=Q64470499}}
to fr:Réacteur nucléaire naturel d'Oklo. Then the English and French article would both link all the other articles, but many of the other articles would still only link their own type.
Discussion at Talk:Visual pollution#Old revision is shown for logged out users

Appeal for Peer Review
talk) (I have recently finished a user script which would help file movers when moving files. See the request here. From WP:File movers:
The script is called FileMoverHelper, and can be found here. Here is a synopsis:
- Get file move destination.
- Move file to destination.
- Remove {{Rename media}} template from destination (if there is one).
- Find backlinks and redirect them to the new file name.
I appeal for peer review in order to prevent any unnecessary disruption that a faulty script of this kind may cause. Kindly leave any comments on the talk page. Regards,
talk) (@Guywan:, you should also match file names not in links, such as images in infoboxes; this will also allow you to match the Image: namespace that might be used. I don't think people are including filenames as plain text on pages. Your regex should also match an uppercase or lowercase first letter, as well as underscores instead of spaces, and it should escape regex characters such as the dot. I would just do something like new RegExp('[' + source.charAt(0).toUpperCase() + source.charAt(0).toLowerCase() + ']' + mw.util.escapeRegExp(source.slice(1)).replace(/[ _]/g,'[ _]')). You should use list=imageusage to get all image uses.
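The suggested regex can be sketched standalone like this (escapeRegExp is a plain stand-in for mw.util.escapeRegExp so the pattern-building logic is visible outside MediaWiki):

```javascript
// Build a regex that matches a file title with either first-letter case
// and with spaces/underscores interchangeable, as suggested above.
// escapeRegExp stands in for mw.util.escapeRegExp.
function escapeRegExp(str) {
  return str.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

function fileTitleRegExp(source) {
  return new RegExp(
    '[' + source.charAt(0).toUpperCase() + source.charAt(0).toLowerCase() + ']' +
    escapeRegExp(source.slice(1)).replace(/[ _]/g, '[ _]')
  );
}

const re = fileTitleRegExp('My file.png');
console.log(re.test('my file.png')); // true (lowercase first letter)
console.log(re.test('My_file.png')); // true (underscore for space)
console.log(re.test('My filexpng')); // false (the dot is escaped)
```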
file: and there's no need to use `${destination}`, just use destination.
talk) (@BrandonXLF: Thanks for the help! I knew there would be some problems with the regular expressions. Template:Tq You're probably right about that; I'm nothing if not paranoid. Do you think the gallery regex is even necessary?
talk) (@Guywan:, it's no longer needed. For part 4 of the script, you should use continue/iucontinue to fetch all results. When the call finishes, if result.continue is present, make the call again using result.continue as part of the new parameter object you pass to mw.Api; if result.continue is not present, there are no more calls to make. You should also respect the edit rate limit when making the replacements. You can get the edit rate from meta=userinfo by passing uiprop=ratelimits. It will be contained in query.userinfo.ratelimits.edit.type_of_user.hits and query.userinfo.ratelimits.edit.type_of_user.seconds. Store hits/seconds as editrate. You can then run an interval of 1000/editrate ms that makes the next queued edit if one is pending, and clears the interval otherwise.
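The continuation loop described above can be sketched like this (fakeApi is a hypothetical synchronous stand-in for mw.Api().get() so the pagination logic is visible; the real API call is asynchronous):

```javascript
// Sketch of the continue/iucontinue loop described above: call the API
// repeatedly, feeding `result.continue` back into the next request until
// it is absent. fakeApi paginates a fixed backlink list two at a time.
function fakeApi(params) {
  const all = ['Page A', 'Page B', 'Page C', 'Page D', 'Page E'];
  const start = params.iucontinue || 0;
  const result = { query: { imageusage: all.slice(start, start + 2) } };
  if (start + 2 < all.length) result.continue = { iucontinue: start + 2 };
  return result;
}

function getAllImageUsage() {
  let params = {};
  const pages = [];
  while (true) {
    const result = fakeApi(params);
    pages.push(...result.query.imageusage);
    if (!result.continue) break;           // no more pages to fetch
    params = { ...params, ...result.continue }; // feed continue back in
  }
  return pages;
}

console.log(getAllImageUsage()); // [ 'Page A', 'Page B', 'Page C', 'Page D', 'Page E' ]
```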
talk) (@BrandonXLF: Thanks again. I've implemented your suggestions. Do you know where the rate limits of different user groups are defined?
talk) (@Guywan:, the API returns the correct edit rate for the user calling the API, although I'm not sure what it does for admins who don't have an edit rate limit from what I can tell. The settings are at https://noc.wikimedia.org/conf/highlight.php?file=InitialiseSettings.php.
talk) (For admins, it doesn't return anything (data.query.userinfo.ratelimits
is an empty object - just tried it out on the testwiki). So at present the script would give syntax error if an admin uses it.
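Putting the two observations together (hits/seconds from uiprop=ratelimits, and an empty ratelimits object for admins), a defensive sketch of the interval computation; the field layout is assumed from the description above:

```javascript
// Compute the delay between edits from a meta=userinfo&uiprop=ratelimits
// response. Admins get an empty ratelimits object (as noted above), so
// fall back to no delay rather than throwing.
function editIntervalMs(ratelimits) {
  const edit = ratelimits && ratelimits.edit;
  if (!edit) return 0; // no rate limit recorded (e.g. admins)
  // The limit is keyed by user type (e.g. 'user', 'newbie'); take the first.
  const key = Object.keys(edit)[0];
  const { hits, seconds } = edit[key];
  return 1000 / (hits / seconds); // milliseconds between edits
}

// e.g. a limit of 90 edits per 60 seconds:
console.log(editIntervalMs({ edit: { user: { hits: 90, seconds: 60 } } })); // ~666.7
console.log(editIntervalMs({})); // 0 (no limit object, as for admins)
```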
talk) (@Guywan: I'm not much of a scripting expert, but I know enough to bring up a concern. In your script, I see no references specifically for replacing file links (referencing "Template:Tq" in your items above) that may contain an "Image:" or "Media:" prefix instead of the traditional "File:" namespace prefix. Specifically, I don't see the words "Image" or "Media" anywhere in the script, which is why I assume my concern is valid. I raise it because the "Image:" prefix, at least, is still used in file links on several pages. Is this addressed in the script?
talk) (@Steel1943: Your concern is valid. I think that, working from BrandonXLF's feedback, I have addressed it. I did not consider the "Image:" prefix, because I made the assumption that editors would behave and not use it. I wasn't aware of a "Media:" prefix; where is it used?
talk) (@Guywan: For your script to load reliably, you need to make sure that the dependencies are loaded before the script code is run. You can do this by replacing $(() =>
with $.when($.ready, mw.loader.using(['mediawiki.util', 'mediawiki.api', 'mediawiki.user', 'mediawiki.notify'])).then(() =>
talk) (@SD0001: Thanks for the help. I've never used mw.loader.using()
in any of my scripts, and they seem to run fine. Could you clarify what it means to load reliably?
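SD0001's point is that the script body must wait for both DOM readiness and the module loads; a framework-free sketch of the same gating pattern (loadModule and domReady are hypothetical stand-ins for mw.loader.using and $.ready):

```javascript
// Gate the script body on several async prerequisites, mirroring
// $.when($.ready, mw.loader.using([...])).then(...).
function loadModule(name) {
  return Promise.resolve(name); // stand-in: pretend the module loaded
}
const domReady = Promise.resolve('dom'); // stand-in for $.ready

async function main() {
  // The body runs only once every prerequisite has resolved.
  await Promise.all([domReady, loadModule('mediawiki.util'), loadModule('mediawiki.api')]);
  return 'body ran after all dependencies resolved';
}
```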
Template:Closing
talk) (Could somebody who understands template magic look at Template:Closing. The {{#if:{{{admin|}}}...}} is obviously supposed to be switching between an admin close vs a NAC, but it always comes up NAC. --
talk) (@RoySmith: Looks like it's working? See Special:PermaLink/948377314. --
talk) (
talk) (@RoySmith: You didn't pass in anything in the Template:Para parameter so it got parsed as empty making the #if-case false. Try something like Template:Tlx. --
talk) (@QEDK:, I'll try that the next time I use it, but I don't recall ever having this trouble before. I see @Wugapodes: made a recent change to the template; maybe that caused the change in behavior? I note the template documentation says, "This template takes no parameters."; maybe that's just wrong? --
talk) (Yes, that was the purpose of the change I made. I felt the previous wording of "admin or other editor" was redundant. If there's a need to specify that the closer is an administrator, it should state that unequivocally, otherwise it defaults to the catch-all term "editor". I guess I forgot to change the documentation. Template:Ec
Rotated file at Church of Saint Mary of Jesus
talk) (Does anybody understand why the file in the infobox appears to be 90 degrees rotated? The original on Commons is not rotated.--
talk) (@Ymblanter: It looks OK for me, can you confirm if the issue still exists. For anyone else, no recent changes to the file, article or the infobox, and nothing else that I can check comes to mind. --
talk) (Yes, for me it still exists. I checked indeed that there have been no recent changes.--
talk) (@Ymblanter:, It looks correct to me, but I've seen something like this before. I vaguely remember it being cache related, so perhaps try all the usual suspects of emptying your browser cache, purging the page, etc? --
talk) (Thanks. I have never visited this page before, so I do not quite understand where the cache problems could come from, but I can indeed try to wait for several days, which is the typical expiration time for the image cache.--
talk) (Client-side (browser) caches can be cleared anytime (setting dependent on browser). Purging the server-side cache is done on a "as needed" basis, the purge action will do that instantly. --
talk) (@QEDK:, Multi-level caching is 1) a good way to make things efficient and 2) a good way to make things confusing :-). I suspect the sequence of events is something like: 1) the server cached the wrong version. 2) Ymblanter viewed the page and now his browser cached the wrong version as well. 3) The server-side cache got invalidated and refreshed so it now had the correct version. 4) You and I looked at the page and saw it correctly, even though Ymblanter is continuing to see the stale version out of his browser cache. --
talk) (The idea is that the server will now be serving the image with a different cache header, which would effectively tell the browser that the image stored in its cache is out of date, and the browser would ideally re-download the new image (all modern browsers will, at least), so unless it's a particularly ancient browser with strict time-based cache headers, who knows. --
talk) (@QEDK:, Well, that's certainly the theory. In practice, however, I see enough problems where the fix is to manually purge the cache that I have to assume something is broken in the Wikimedia server-side cache management implementation. --
Script for showing only the red lines in lists
talk) (Template:Moved from
A list can contain links to Wikipedia articles. Is it possible to make a script that filters the list and removes the lines not containing red links?
For example this list.
Say for example these 4 lines:
- Big Springs - Big Springs, Ohio
- Billingstown - Billingstown, Ohio
- Birds Run - Birds Run, Ohio
- Birmingham - Birmingham, Ohio
After applying the filter, only the 2nd and the 3rd line should be visible (Billingstown and Birds Run are red links at this moment):
Or, to make it easier, say the list contains only one link per line (red or blue):
The result would be:
The script should be a gadget or a Greasemonkey script or maybe even a bookmarklet. What's the easiest way to implement such a filter and where should I ask for help for someone to create such a script?
Thanks.
—
talk) (@Ark25:, I have two versions: javascript:$('.mw-parser-output li').each(function(){if(!$(this).find('a.new').length)this.innerHTML=''})
and javascript:$('.mw-parser-output li').each(function(){if(!$(this).find('a.new').length)this.style.display='none'})
. The first one empties the content of list items without redlinks, whereas the second one hides those list items entirely.
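For anyone who wants to test the filtering rule outside a browser, the same "keep only items containing a redlink" predicate can be expressed as a pure function. This is just a sketch, assuming MediaWiki's convention that redlinks are rendered as anchors with class `new`:

```javascript
// Sketch: the redlink predicate as pure string logic (no DOM needed).
// Assumes MediaWiki renders redlinks as <a ... class="new" ...>.
const hasRedlink = (html) =>
  /<a\b[^>]*\bclass="[^"]*\bnew\b[^"]*"/.test(html);

// Keep only list items that contain at least one redlink.
const onlyRedlinkItems = (items) => items.filter(hasRedlink);
```

The bookmarklets above do the equivalent check with jQuery's `.find('a.new')`, which is the more robust approach in a live page since it matches actual classes rather than raw markup.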
talk) (Template:Reply to These are really great scripts, thank you! The second is more useful for me because the result is more compact but both are great scripts.
Is there any chance you can make another script to remove elements not containing a certain string? Say for example the string is "ir" - then only the last 2 items in the list would remain. Thanks in advance. —
talk) (@Ark25:, would something like javascript:$('.mw-parser-output li').css('display','');if(str = prompt('Please enter a string to search for:'))$('.mw-parser-output li').each(function(){if(!$(this).text().includes(str))this.style.display='none'})
work?
talk) (@BrandonXLF: It works really great, it's amazing what JavaScript can do these days! Yesterday I posted the same question on Stack Overflow too; can I/you post the script there too? Not sure why the script doesn't work on that particular page, but it works very well here on Wikipedia. —
javascript:(function(){for(var a=document.getElementsByTagName("li"),b=prompt("Please enter a string to search for:"),c=0;c<a.length;c++)a[c].style.display=a[c].innerText.includes(b)?"":"none"})()
. talk) (@BrandonXLF: Again, a great script; yes, it works very well, everywhere. I've posted it on Stack Overflow. Thank you very much! —
Put \[\[(.*)\]\]
at "Search for" and {{subst:#ifexist:$1||[[$1]]}}
at "Replace with:". Checkmark "Treat search string as a regular expression" and click "Replace all". Then save the page, or preview if that's enough for your purpose. ifexist is an expensive parser function, so at most 500 are allowed at a time. talk) (For those of us who don't really know what we're doing, how can we make use of this script? —
@Rhododendrites: Just add a bookmark in your browser and, instead of providing a URL (web link), put the script there. To add a bookmark, right-click on the bookmark bar. If you don't see the bookmark bar then you probably don't have any bookmarks in it, so add the first one with CTRL+D.
- Also, this is what Google search on "how to make bookmarklet" returns:
In Chrome, click Bookmarks->Bookmark Manager.
You should see a new tab with the bookmarks and folders listed.
Select the “Bookmarks Tab” folder on the left.
Click the “Organize” link, then “Add Page” in the drop down.
You should see two input fields. ...
Paste the javascript code below into the second field.
—
Number of RFCs started per user
talk) (The WP:RFC process has a Tragedy of the commons problem: we can accommodate a bad or unnecessary RFC here or there, but the more RFCs we have, and the lower the average value of the RFC, the less anyone wants to participate at all, or the less thoughtful their responses will be.
We are discussing some ways to improve this. One of the proposals is to limit the number of RFCs an editor can open in a month. The idea is to limit the "outlier" behavior, from the small handful of editors who create many RFCs, rather than to bother the ordinary editor (median RFCs started per year = 0). It seems to me that even among people who use the RFC process, it's unusual to create more than about three RFCs in a month. But I'd rather have "official records" instead of just telling people that I think most people won't start more than a couple, and that creating 10 in a month (which a now-banned editor did, a year or two ago) is a highly unusual and probably bad event.
I wonder whether anyone here could find out just how many RFCs each editor has started during a month (e.g., via database dumps). For some years, the RFCs have all been given a unique id number, and they're all listed at the RFC subpages. What I imagine would happen is that you search for the id number and then record the first username/link and the date after the RFC id (a few are signed with just a date, but it's not been common in recent years). The id numbers should prevent problems with double-counting a single RFC multiple times (e.g., if it's listed as history and science).
What I'd like probably looks like this:
RFC record
User | Highest number opened in 30-day period |
---|---|
Alice | 2 |
Bob | 1 |
WhatamIdoing | 5 |
Is that possible?
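As a sketch of the counting step only (this is not an existing tool; it assumes the per-user RFC opening timestamps have already been extracted somehow), the "highest number opened in a 30-day period" column could be computed with a sliding window:

```javascript
// Given one user's RFC-opening timestamps (ms since epoch, any order),
// return the largest number that fall within any 30-day window.
function maxInWindow(timestamps, windowMs = 30 * 24 * 60 * 60 * 1000) {
  const ts = [...timestamps].sort((a, b) => a - b);
  let best = 0, lo = 0;
  for (let hi = 0; hi < ts.length; hi++) {
    // Shrink the window from the left until it spans at most 30 days.
    while (ts[hi] - ts[lo] > windowMs) lo++;
    best = Math.max(best, hi - lo + 1);
  }
  return best;
}
```

A sliding (rather than calendar-month) window matches the "30-day period" framing above and avoids undercounting bursts that straddle a month boundary.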
talk) (@WhatamIdoing: The RfC ID is only for currently-running RfCs (according to Legobot tracking Template:Tlx usage). If we track RfCs with 7-hexadigit hashes similar to Legobot, by ~2300 RfCs, we would reach the number of RfCs where the collision rate will be too high (people will start noticing, bot will be confused) to use the hashes for any realistic purpose. Simply put, even if you want the hashes as a tracking mechanism in the future, you have to change the bot hash function to produce longer hashes. --
talk) (qedk, I thought that the id numbers basically started at zero and counted up in order. If there are only ~2300 numbers in use, then I think that would get approximately two years' worth of data. That would be a good starting point. (Also: this is a one-off to validate my impressions. I don't need daily updates forever or anything like that.)
talk) (@WhatamIdoing: The actual numbers would ideally be 16^7 (total range of the function: 0000000->FFFFFFF). At around ~2300, you'd probably start noticing collisions, where two RfCs have some probability of having matching hashes. Either way, two simple ways would be: a tracking bot to go after Legobot's Template:Para while they are in-use. The other is to make Legobot itself do the job. --
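The intuition about when collisions start mattering can be checked with the standard birthday-problem approximation. This is a rough estimate only, and says nothing about Legobot's actual hash function:

```javascript
// Approximate probability of at least one collision among n random ids
// drawn uniformly from a space of the given size (birthday-problem bound).
function collisionProbability(n, space) {
  return 1 - Math.exp(-n * (n - 1) / (2 * space));
}

const space = Math.pow(16, 7); // 7 hex digits: 268,435,456 possible ids
```

By this estimate, ~2300 ids gives roughly a 1% chance of at least one collision, and a 50% chance is not reached until around 19,000 ids. And since the bot checks previously issued ids before handing out a new one, a collision only means retrying, so the practical effect is slower id generation rather than corruption.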
talk) (The rfcid numbers are not re-used. If desired, I can rig up a demonstration of why they must not be re-used, except when the second use is on the same page that the first use occurred.
Legobot maintains a permanent table of rfcids that it has issued in the past, and it uses this when generating a new rfcid to ensure that there is no collision (if you're interested, the table is described here, it's the first one of the five). So 16^7 are certainly possible, although as time goes by the process of generating a fresh rfcid will slow down as the collision rate increases. Some rfcs have had more than one rfcid: there are several reasons that this might happen, and the most obvious one is when a Template:Tlx tag is removed (perhaps by Legobot itself, due to thirty days having elapsed) and then re-added without the Template:Para parameter (perhaps to extend the rfc). Another reason is so that an RfC may need to be removed from an inappropriate category - for example, if somebody uses Template:Tlx without any parameters at all, Legobot will put it into Wikipedia:Requests for comment/Unsorted, and the only way of getting it out again, without actually ending the RfC, is to remove the Template:Para and simultaneously add a valid category, as I did Template:Diff - Legobot followed that up with these edits. --
talk) (I tried finding the source code; it seems you're better at web-sleuthing than me! It doesn't look like the table maintains a record of revisions/filer/description, so that's a bit unfortunate. OTOH, I did get to see a good bit of ol' programmer humour:
$page->addSection('HELP! PLEASE!','Something is very wrong. I\'m having trouble generating a new RFC id. Please help me. --~~~~');
die();
@Redrose64: SELECT count(rfc_id) FROM rfc WHERE rfc_id=?;
This counts the number of entries in the rfc_id column with one character? I don't understand it. --
talk) (@QEDK:, in mysqli, when a ? is added to a prepared statement it acts as a placeholder: it is replaced later on, when bind_param is called and the statement is executed. In this case the query checks whether a newly generated id already exists in the rfc_id column, because the ? is replaced with $tempid.
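To make the placeholder mechanics concrete, here is a toy illustration of what binding does conceptually. Real drivers such as mysqli send the query text and the parameters to the server separately; they do not build a string like this, and this naive quoting must never be used to construct real SQL:

```javascript
// Toy model of '?' placeholder binding: each '?' is replaced, in order,
// by the corresponding bound parameter (quoted if it is a string).
function bind(sql, params) {
  let i = 0;
  return sql.replace(/\?/g, () => {
    const v = params[i++];
    return typeof v === 'number'
      ? String(v)
      : `'${String(v).replace(/'/g, "''")}'`;
  });
}
```

So `SELECT count(rfc_id) FROM rfc WHERE rfc_id=?` bound with a candidate id counts how many existing rows already use that id: 0 means the id is free, anything else means the bot regenerates.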
Button coding with Lua
talk) (We'd like some help over at Wikipedia_talk:Teahouse#Suggestions_for_improving_the_Teahouse_design getting a custom button to be able to display a "pressed" state when linking to the page it's on the same as Template:Clickable button 2 already does. Would anyone who knows Lua be able to help? (ctrl+f for "depressed" and start reading from there.)
Lots of first level sections are not collapsible
talk) (Template:Tracked
Hi, in the Legality of bestiality by country or territory article there are lots of second level sections that are not collapsible. Can someone see where the problem is? I am using my phone and I can't find the problem.--
talk) (@SharabSalam:, Can you give us more details? Are you looking at this with a normal web browser on your phone? Does the URL contain "en.wikipedia.org" or "en.m.wikipedia.org"? Or are you using the Wikipedia App, and if so, Android or iOS? And, specifically, which section heading do you think should be collapsed? --
talk) (- This is happening on my phone. It is Android and I am using Chrome. Lots of second level sections in that article are not collapsible. The third level sections have underlines, which is not how a normal third level section should look. Here are some screenshots to illustrate the problem.
- Screenshot A is from the article. Notice that there is no downwards/upwards arrow and that there is an underline under the third level section header, which is not normal.
- Screenshot B is from my sandbox. This is how sections should look.
- All of the sections in that article are like this except, I think, two sections at the end. One is "Notes" and the other is "History". This is annoying especially because there is a very long section that is not collapsible called "National law" and you have to scroll down for 15-20 seconds to reach the end of the section.--
talk) (I see the same in Safari on an iPhone. A level 2 section has "==" and level 3 has "===". It's level 2 which isn't collapsible and level 3 which has underlines that aren't normally there. I have a collapse arrow at "Notes", "History" and "Zoophilic pornography" but the latter only collapses the first line with Template:Tlx. On the mobile version in desktop with Firefox I see the three navboxes in [14] so something odd is going on. I don't see them in preview and I never see navboxes on articles in the mobile version. They aren't supposed to be in the html in mobile so it's not just an unloaded CSS statement to hide them. A null edit didn't change anything. None of the unusual things appear in preview so it's difficult to investigate.
talk) (@SharabSalam:, Wow, this is really weird. I made a copy of the page in my userspace and started hacking away at chunks. Eventually I got down to this diff causing the problem to appear or not. I'm totally mystified.
What this smells like is running into some kind of size limit rather than any actual broken markup. I'm not familiar with the code base, but I could imagine something like reading the next X kb of text to see if you can find the start of the next level-2 section, and that failing if the next section head is further away than that. But that's just pure speculation. --
talk) (More on this being a server-side issue, the h2 tags are being generated differently. For example a bad one and a good one:
<h2 class="section-heading"><span class="mw-headline" id="Public_opinion">Public opinion</span></h2>
<h2 class="section-heading collapsible-heading open-block" tabindex="0" aria-haspopup="true" aria-controls="content-collapsible-block-3"><div class="mw-ui-icon mw-ui-icon-mf-expand mw-ui-icon-element mw-ui-icon-small mf-mw-ui-icon-rotate-flip indicator mw-ui-icon-flush-left"></div><span class="mw-headline" id="Zoophilic_pornography">Zoophilic pornography</span></h2>
but that's not surprising given what I already found. --
talk) (Thanks for working on this. @PrimeHunter:, do the navboxes still appear in this version of the article in RoySmith's user space? Just to know if these problems are related.--
talk) (
talk) (It has to do with size because no matter what I remove, as long as I remove enough of the page, the page parses properly on the mobile site. Both halves of the page work properly when split, 1 and 2. The edit that fixed the issue in RoySmith's userspace reduced the size of the page from 112867 bytes to 112834 bytes.
talk) (It has to do with the number of files (at least partly; maybe it has to do with the size of the file links) because all RoySmith did was remove a file to fix the issue, and I removed a different file here [15] and it also fixed the issue. This revision with a lot of files [16] has the same issue as well.
talk) (@SharabSalam: I don't know if you're following the phab ticket, but there was an interesting suggestion that using flag emojis (https://flagpedia.net/emoji) might be a possible workaround. --
talk) (It is now fixed [17]. I replaced X and question marks with emojis. Thanks everyone for your help.--
talk) (@SharabSalam:, Well, I'm glad we found something that got you up and running. Weird about not being able to access phab. If you can give me some more details (URL you're accessing, error message you get, etc) I'll see if I can figure out what's up with that. --
- The message is like this:
- It's not a big deal. It seems to only happen when I am using my WiFi network, because when I use another WiFi network or a proxy I can access that website. If I want to enter that website, I just use a proxy.--
Eliminating certain users' edits from your watchlist
talk) (This may have been asked before, and I'm dreadfully sorry if so, but is there a way to remove specific users' edits from your watchlist?
talk) (Something like this: let wl = document.querySelector('ul.special');
for (let li of wl.querySelectorAll('li')) {
let un = li.querySelector('bdi');
if (un.innerHTML === 'BobTheUser') { li.css.display = 'none'; }
}
(untested).--
talk) (And sorry to be a dunce, but where do I add that code please?
talk) (The code is broken in various ways, so I'd recommend against using it. (Normally, JS code can be added to Special:MyPage/common.js.) Any code to remove particular listings would need to work somewhat differently depending on what settings one is using. --
talk) (Well I'm just using standard monobook.js and would like to remove one specific disruptive user from my watchlist if that's possible.
talk) (@The Rambling Man: Specifically the relevant settings are the "Use non-Javascript interface" option in Special:Preferences#mw-prefsection-watchlist and the "Group changes by page in recent changes and watchlist" in Special:Preferences#mw-prefsection-rc. Assuming both options are at the default state (unchecked), then, using Jorm's code (slightly modified below to use the correct style keyword and to run after load) should work: $( function () {
let wl = document.querySelector('ul.special');
for (let li of wl.querySelectorAll('li')) {
let un = li.querySelector('bdi');
if (un.innerHTML === 'BobTheUser') { li.style.display = 'none'; }
}
} );
(Replace "BobTheUser" with the appropriate username to be blocked.) This should work, so long as the watchlist JS itself isn't doing anything crazy, which it might. --
Is there a tool for this?
Cyphoidbomb (talk) 02:12, 20 March 2020: Hey all, if I think that an editor is restoring a much earlier version of an article, is there a tool that could look at Entire Article A and scan back through the edit history to find all the versions (Entire Article B-Z) that match? WikiBlame doesn't quite do the trick, since it requires a specific phrase, and typically stops as soon as it finds the first instance of that phrase. Context: Let's say I think they might be evading a block, and I want to see if they're restoring a version that they manipulated under a different account at some point in the distant past. Thanks,
John of Reading (talk) 07:28, 20 March 2020: @Cyphoidbomb: WikiBlame should be able to help with this. Set the "Start date" back a bit, to a date before the old version was re-instated, and tick the box "Look for removal of text". It should then find the last of the old revisions that contained your phrase. --
MusikAnimal (talk) 16:51, 20 March 2020: You could also run a database query for this, if you're searching for previous revisions that exactly match a given revision in its entirety. See quarry:query/43132; this one finds all revisions of my sandbox where it was a blank page (i.e. matching the content of Special:PermaLink/944834000). Just fork that query and replace the page_title, page_namespace and rev_id accordingly (see comments). Hope this helps,
Cyphoidbomb (talk) 19:35, 20 March 2020: @MusikAnimal: I think this might be exactly what I need, thank you. I gave it a whirl and ran into some issues. If I was searching an article, should the page_namespace value be zero? That seemed to work, but I wanted to double-check. Where can I find the other appropriate values for, say, Wikipedia space, Article space, draft space, etc.? Thanks!
John of Reading (talk) 19:58, 20 March 2020: @Cyphoidbomb: Those numbers are listed in the box at the top right of Wikipedia:Namespace. --
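For convenience when forking such queries, the most commonly needed English Wikipedia namespace numbers can be kept in a small lookup. These values are standard for enwiki, but double-check anything less common against the table at Wikipedia:Namespace:

```javascript
// Common English Wikipedia namespace numbers (per Wikipedia:Namespace).
const NAMESPACES = {
  'Main': 0,        // article space
  'Talk': 1,
  'User': 2,
  'User talk': 3,
  'Wikipedia': 4,   // project space
  'File': 6,
  'Template': 10,
  'Category': 14,
  'Draft': 118,
};
```

So for an article the page_namespace value is indeed 0, and for a draft it would be 118.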