
Wikipedia:Bot requests/Archive 29



Archive bot

I want to operate a bot that archives old discussions every month or so. Any discussion that hasn't had a response in the last 60 days will be automatically archived. Jupiter.solarsyst.comm.arm.milk.universe 19:52, 23 June 2009 (UTC)

User:MiszaBot III –xenotalk 19:52, 23 June 2009 (UTC)

Hi, how can I operate a bot that archives articles about Australia fortnightly? —Preceding unsigned comment added by 115.131.201.213 (talk • contribs)

Why would you want a bot which archives actual articles? You can see how an article has changed by going to its history. Look at Help:History for help with this. - Kingpin13 (talk) 09:02, 24 June 2009 (UTC)

Categories of Prince Edward Island

List all categories of Prince Edward Island here. Don't list categories of Category:Aboriginal peoples in Atlantic Canada. Thanks! --Sniff (talk) 00:20, 23 June 2009 (UTC)

Thank you Chris G, but I want only the categories, not articles or files. (see below) Thanks, Sniff (talk)

Birmingham City Council URLs

Birmingham.gov.uk URLs (of which there are many on Wikipedia) are changing. In particular:

  1. URLs in the form http://www.birmingham.gov.uk/parks.bcc will drop the four-character .bcc suffix, thus: http://www.birmingham.gov.uk/parks.
  2. URLs beginning http://www.birmingham.gov.uk/GenerateContent? (e.g. [1]) will disappear completely.

The former could be replaced by a bot. If someone can list all the latter, preferably as a CSV file and ideally with the title from the target page, we will provide equivalent permalinks in the former (suffix-free) format. We will upload the revised file to our website, to demonstrate its provenance.
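The first change is mechanical enough to sketch. The helper below is illustrative only (the function name and the exact URL pattern are assumptions; a real bot would run something like this over each article's wikitext):

```python
import re

# Illustrative: drop the four-character ".bcc" suffix from
# birmingham.gov.uk URLs, per change 1 above.
BCC_URL = re.compile(r'(https?://www\.birmingham\.gov\.uk/[^\s\]<>]*?)\.bcc\b')

def strip_bcc_suffix(wikitext):
    """Return wikitext with the .bcc suffix removed from matching URLs."""
    return BCC_URL.sub(r'\1', wikitext)
```

The GenerateContent URLs deliberately don't match this pattern; those need the CSV-based lookup instead.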

Thank you. BCCWebTeam (talk) 16:37, 19 June 2009 (UTC)

I've posted the CSV data you requested for the second part at User:BCCWebTeam/GenerateContent URLs.csv. Anomie 16:41, 20 June 2009 (UTC)
Thank you. That will be worked on as and when time allows. BCCWebTeam (talk) 13:25, 23 June 2009 (UTC)

Will the .bcc suffix URLs actually break, or will they just redirect to the suffix-less URLs? (Currently the non-suffix ones redirect to have a .bcc suffix.) If they will actually break, I can have DeadLinkBOT work through them.

There are also a large number of URLs in the following formats, which may or may not need to be modified.

--ThaddeusB (talk) 17:28, 20 June 2009 (UTC)

The .bcc suffix will cease working; please apply the bot as you suggest. It would be good to have the other four variants you list, which will also cease working, added to the CSV (or in a separate file) please. BCCWebTeam (talk) 13:25, 23 June 2009 (UTC)

Seems like a lot of work for the .bcc files when they could just code a simple redirect on their end. Just saying.. - ALLSTRecho wuz here 14:24, 23 June 2009 (UTC)

I can do the replacement with my bot globally (there are about 200 links on wikis other than enwiki). But I think it would be good to wait until you have provided permalinks for the other cases too, so the bot has to edit each article only once.
My script cannot read the title of PDF files. I can only create a list of all URLs and the content type, if that's enough - or somebody else has to create the list. Merlissimo 19:40, 24 June 2009 (UTC)

Discussion pages are regularly archived, so most links to the main discussion pages will become dead links after a while. This bot will find dead links pointing to discussion pages and then fix them by linking to the correct archive pages. This will only work if the link points to a specific section. So for example, if a link points to http://en.wiki.x.io/wiki/Wikipedia:Bot_requests#Message_Bot it will be fixed to http://en.wiki.x.io/wiki/Wikipedia:Bot_requests/Archive_7#Message_Bot. For more accuracy the bot will check the time the link was made and compare it with the time the archived page was created. Dy yol (talk) 18:09, 24 June 2009 (UTC)
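The lookup step of this proposal can be sketched as a pure function, assuming the bot has already built a map from each archive page to the section anchors it contains (all names here are illustrative; the timestamp-comparison refinement is omitted):

```python
def fix_archived_link(url, archive_sections):
    """Re-point a dead section link at the archive page that now holds
    that section; return the URL unchanged if no archive matches."""
    if '#' not in url:
        return url  # only section (fragment) links can be relocated
    base, fragment = url.rsplit('#', 1)
    prefix = base.rsplit('/', 1)[0]
    for archive, anchors in archive_sections.items():
        if fragment in anchors:
            return '%s/%s#%s' % (prefix, archive, fragment)
    return url
```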

This seems like a generally good idea... except that it would change other users' comments on talk pages, which shouldn't generally be changed by other editors (or bots). That having been said, this might be a valid exception, since it comes up a lot and needing to search through archives for discussions like that isn't real easy. –Drilnoth (T • C • L) 18:40, 24 June 2009 (UTC)
It could put a comment in rather than actually modifying the user's message (e.g. "blablabla [link] blabla" becomes "blablabla [link] [correct link] blabla"). - Kingpin13 (talk) 18:43, 24 June 2009 (UTC)
ClueBot III already does this when it archives a discussion. --Chris 09:51, 25 June 2009 (UTC)

Gaia Bot

Can anyone make me a gaia jigsaw bot or just teach me how to make one

Hmm? This is the page for requesting bots for use on the English Wikipedia. I fear your request refers to a different site. - Jarry1250 [ humourousdiscuss ] 13:06, 25 June 2009 (UTC)

Unincorporated communities in Delaware

Could a bot sort articles in Category:Unincorporated communities in Delaware into Category:Unincorporated communities in New Castle County, Delaware and Category:Unincorporated communities in Sussex County, Delaware, replacing both the parent category and the county category on the articles? There are three counties in Delaware, but as far as I know, there's only one article about an unincorporated community in Kent County, so don't worry about it. Nyttend (talk) 14:13, 25 June 2009 (UTC)

I'm assuming that this should be an easy job, BTW; it seems that you could add this category to all pages that were in both the UiC. communities in DE category and the county category. Nyttend (talk) 03:55, 26 June 2009 (UTC)

WikiProject Journalism: Banners

I was halfway through adding these manually when I remembered this page!! Could someone write a bot to add the WikiProject Journalism banner to all the subcategories and the pages in Category:Newspapers published in India and, if it's a stub, assess it as a stub. The code for the banner is: {{WikiProject Journalism}}. Thanks! --Siddhant (talk) 11:54, 22 June 2009 (UTC)

No takers for this task? Interesting... --Siddhant (talk) 09:34, 25 June 2009 (UTC)
Doing... Anomie 12:31, 25 June 2009 (UTC)
Y Done (about 23 hours ago) Anomie 12:37, 26 June 2009 (UTC)

Thousands of species articles link to the IUCN Red List as a source. Unfortunately, the IUCN recently changed its organization scheme, using new URLs for everything, and they didn't leave redirects behind. (I hate it when sites do that.) So an article like Golden-browed Chat-tyrant links to the Red List page http://www.iucnredlist.org/details/49963 which used to be correct, but now gives a "page not found error". The correct page is now http://www.iucnredlist.org/details/145230/0 but the only way to find it, so far as I can tell, is to search for the common name or binomial name in their built-in search.

I was hoping someone could write a bot to do the following:

  • Look at every Wikipedia article that links to a page at iucnredlist.org
  • Follow the link, and if it's live, skip it. If it's a dead link...
  • Search for the article title via a "post" call in their search.
  • If there's a single hit, change the link in our Wikipedia article.
  • If there are more or fewer than one hit, put it in a log for human investigation.
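The loop above can be sketched as a pure function, with the HTTP liveness check and the site search passed in as callables (everything here is illustrative, not working IUCN-scraping code):

```python
def triage_iucn_links(links, is_live, search):
    """Sort IUCN links into automatic replacements and a human log.

    links:   mapping of article title -> current iucnredlist.org URL
    is_live: callable(url) -> bool, the dead-link check
    search:  callable(title) -> list of candidate URLs from their search
    """
    replacements, needs_review = {}, []
    for title, url in links.items():
        if is_live(url):
            continue                     # live link: nothing to do
        hits = search(title)
        if len(hits) == 1:
            replacements[url] = hits[0]  # exactly one hit: safe to fix
        else:
            needs_review.append(title)   # 0 or 2+ hits: log for a human
    return replacements, needs_review
```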

Would someone be willing to do this? Thanks, – Quadell (talk) 12:32, 26 June 2009 (UTC)

Looks like an interesting task. Coding... --Chris 14:06, 26 June 2009 (UTC)
brfa --Chris 06:25, 27 June 2009 (UTC)
Have you tried emailing the domain owners about this problem? Usually webmasters have no idea when they break things and later wonder why their Google rankings went down. You might also want to try asking User:ThaddeusB, owner of User:DeadLinkBOT. — Dispenser 15:12, 26 June 2009 (UTC)
You should ask de:User:Cactus26. He changed all IUCN links on dewiki at the end of last year / the beginning of this year. Merlissimo 21:15, 26 June 2009 (UTC)

One-time cleanup, {{Infobox U.S. County}}

There were a number of pages created with incorrect values for the "census yr" parameter. This caused red links that someone turned blue:

United States Census, 2005 (redirect page) (links) 
United States Census, 2006 (redirect page) (links) 
United States Census, 2007 (redirect page) (links) 
United States Census, 2008 (redirect page) (links) 
United States Census, 2009 (redirect page) (links) 
United States Census, 2004 (redirect page) (links) 
United States Census, 2002 (redirect page) (links) 

These pages never needed to exist, because there was not a census. (I cleaned up a few of these on my own.) A bot should clean this up.

  1. Find any page using Infobox US Country where the "census yr" parameter is one of the years listed here.
  2. Switch that parameter name (not value) to "census estimate yr"
  3. After this work is done, there shouldn't be many (any?) links to these pages. It looks like most of the links came from poor use of the infobox. If there are other pages pointing to these redirects, we'll take a look at those afterward.
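Step 2 is a parameter rename, not a value change, so a narrow regex can sketch it (a real bot should use a proper template parser rather than a regex; the year set comes from the redirect list above):

```python
import re

# Years with redirect pages but no actual census, per the list above.
NON_CENSUS_YEARS = {'2002', '2004', '2005', '2006', '2007', '2008', '2009'}

def fix_census_param(wikitext):
    """Rename 'census yr' to 'census estimate yr' when its value is a
    non-census year, leaving the value itself untouched."""
    def repl(match):
        year = match.group(1)
        if year in NON_CENSUS_YEARS:
            return '| census estimate yr = ' + year
        return match.group(0)
    return re.sub(r'\|\s*census yr\s*=\s*(\d{4})', repl, wikitext)
```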

Thanks. Timneu22 (talk) 12:42, 22 June 2009 (UTC)

Did you mean {{Infobox U.S. County}}? (yes, you did; you used it in the section title) It seems it would be more efficient to check the 304 pages linking to each of those pages rather than to search all 3054 pages using the template. I'll see about doing this later today, unless someone beats me to it. Anomie 12:29, 26 June 2009 (UTC)
Doing... Anomie 01:53, 27 June 2009 (UTC)
Y Done Anomie 03:07, 27 June 2009 (UTC)

Thanks. Similarly, would it be possible to modify all links that point to United States Census, 2007 and make them point to United States Census Bureau instead? I'd like to delete the 2007 redirect page. Timneu22 (talk) 17:45, 27 June 2009 (UTC)

The article on the Australian football team, Australia national association football team, was moved recently to the current title per request. Would it be possible to get a bot to go through the huge number of sports-related articles, templates etc. to change old links to the new title? Too many to do by hand, and it looks messy just using the redirect. YeshuaDavid • Talk 23:44, 27 June 2009 (UTC)

See WP:REDIRECT#NOTBROKEN. Anomie 00:23, 28 June 2009 (UTC)

Bot to remove {{popcat}} from categories which it is not needed

I've started a discussion at Category talk:Underpopulated categories#The Underpopulated categories cat is Overpopulated! regarding how to de-populate that category. My idea is to have a bot systematically clean up that category by removing {{popcat}} from all those categories that either:

a.) already are populated with a certain numbered amount of articles, and/or

b.) have been tagged with {{popcat}} for a certain amount of time, or that fall within a certain date period (in which case a date= parameter may be needed?)

Is it possible to have a bot be able to automatically determine these things? -- œ 19:17, 27 June 2009 (UTC)
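Both tests are cheap for a bot to compute (member counts from the API, tagging date from the page history). A sketch of the decision, with placeholder thresholds that would need community agreement:

```python
from datetime import datetime, timedelta

def should_remove_popcat(article_count, tagged_since, now,
                         min_articles=10, max_age=timedelta(days=365)):
    """True if {{popcat}} can come off: the category is populated enough
    (test a) or the tag has been in place long enough (test b).
    The default thresholds are placeholders pending community input."""
    return article_count >= min_articles or now - tagged_since >= max_age
```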

Looks pretty simple. A date parameter will probably not be needed, because the bot could determine when the template was added from the history. I assume that you want arbitrary numbers for the two things you mentioned above; what do you think they would be? The Earwig (Talk | Contribs) 19:25, 27 June 2009 (UTC)
That I'm not so sure about and would like more input from the community, but I suggest 10+ articles with the tag being in place for at least a year. -- œ 12:47, 28 June 2009 (UTC)

Accessdate formatting bot

Could we have a bot edit the accessdate parameter in templates such as {{Cite book}} from something like 2006-02-07 to something like 7 February 2006? Numbers and dashes look messy, in my opinion, and the dates used to be linked before the delinking thing, so they were never intended to look like they do now. It Is Me Here t / c 18:04, 25 June 2009 (UTC)

Agreed that the dashed format is bad, but I don't think that a bot can really do this unless it detects what date formatting style (DMY or MDY) is used in the rest of the article. –Drilnoth (T • C • L) 18:14, 25 June 2009 (UTC)
I love the pseudo-ISO format: unambiguous, multilingual, and short. If anything I'd change pages to using the pseudo-ISO format for accessdate. However, auto-formatting is actually supported with {{#formatdate:}}; no idea why it isn't implemented in the citation template. — Dispenser 19:20, 25 June 2009 (UTC)
OK, in that case, if you can provide me with the exact code to edit Template:Cite web (etc.), then I shall boldly do so. It Is Me Here t / c 20:18, 25 June 2009 (UTC)
The citation templates have been combined into a single core at Template:Citation/core. From the discussions there, the issue seems to be somewhat complex. Non-linked autoformatting might be barred by the discussion and ArbCom ruling over linked autoformatting. There are disputes over which of the three formats should be used, and over whether doing it automatically is a good idea. I also remember reading a proposal to have a date standardization function similar to {{DEFAULTSORT:}}, but I don't think anything has come of it yet. However, one thing seems to be clear: the article should have consistent formatting. So I was wrong with my previous bot assessment.
The bot would convert from ISO only if all dates are pre-2009, when the ISO format was a technical requirement. It would also have to convert dmy and mdy to the dominant format used (let's say 80%). — Dispenser 23:34, 25 June 2009 (UTC)
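The two mechanical pieces of that — detecting the dominant style and rewriting an ISO accessdate — might look like the following (an illustrative sketch, not what any existing bot does; month names assume the default C locale):

```python
from datetime import datetime

def dominant_style(counts, threshold=0.8):
    """Given counts like {'dmy': 40, 'mdy': 3}, return the style used by
    at least `threshold` of the article's dates, else None."""
    total = sum(counts.values())
    for style, n in counts.items():
        if total and n / total >= threshold:
            return style
    return None

def iso_to_style(iso_date, style):
    """Rewrite an ISO accessdate ('2006-02-07') as dmy or mdy text."""
    d = datetime.strptime(iso_date, '%Y-%m-%d')
    month = d.strftime('%B')
    if style == 'dmy':
        return '%d %s %d' % (d.day, month, d.year)
    return '%s %d, %d' % (month, d.day, d.year)
```

Articles with no dominant style (the `None` case) would be skipped or logged for a human, per the consistency concern above.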
I believe T19905 is related to the DEFAULTSORT-like proposal. As for the bot request, IMO it's Declined Not a good task for a bot. Anomie 11:23, 26 June 2009 (UTC)
I have created {{DEFAULTDATEFORMAT}} for use by bots until the bug gets resolved. — Dispenser 20:47, 28 June 2009 (UTC)

Poop Patrol

Hi, every now and then I patrol various parts of the 'pedia for particular words that are either commonly misused or an indicator of vandalism/attacks. But for those with a high proportion of false positives it's a complete pain to search again a few months later and have to trawl the whole lot to find a few new errors. What I would like to be able to search for is, for example, articles containing the word "poop " but not containing the words "poop deck" or "poop cabin", and that didn't contain that word on "dd/mm/yyyy". However, if that isn't too practical, could I have a bit of code that lets me specify a word or phrase, whether I'm looking through article space, user space or user talk, and a sandbox to write the results to. Provided the query refreshed the sandbox with a list of the resulting article names in square brackets and in alpha order, I think I can do the rest simply by looking at what has changed since last time the query was run, though it would be even better if the list contained article links and the date they were added to the sandbox, and were in date order.

I've tried searching for "posses " and written the search to User:WereSpielChequers/posses, but they don't come up as clickable. However it does give an idea as to what I'm searching for. ϢereSpielChequers 19:25, 25 June 2009 (UTC)
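Stripped of the on-wiki plumbing, the query being asked for is a filter plus a set difference (all names here are invented for illustration; a real run would pull page text from a dump or the API):

```python
def new_matches(pages, word, exclude_phrases, previously_seen):
    """Titles whose text contains `word` but none of the excluded
    phrases, minus titles already reviewed in an earlier run."""
    hits = set()
    for title, text in pages.items():
        low = text.lower()
        if word in low and not any(p in low for p in exclude_phrases):
            hits.add(title)
    return sorted(hits - previously_seen)
```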

The best way to scan the 'pedia quickly for complex combinations of words is to get a dump and use AWB's database scanner gizmo, I always think. Unfortunately I can only just squeeze 512 mbits/sec from the copper wiring, so I've never tried it myself. - Jarry1250 [ humourousdiscuss ] 19:39, 25 June 2009 (UTC)
Just realised I didn't actually read what you wrote. Oh well. I'll leave that ^ there anyway just in case it's handy to others. - Jarry1250 [ humourousdiscuss ] 19:40, 25 June 2009 (UTC)
Related. – Quadell (talk) 20:08, 25 June 2009 (UTC)
Thanks guys, to give a simpler example, when I search for "posses " there are over a hundred articles that legitimately contain that word, it would save a lot of faff if a bot could help me ignore the ones I've patrolled before. Is that technically possible? ϢereSpielChequers 21:30, 28 June 2009 (UTC)

NRHP banners

This is a long request, hope that it is composed properly! I would most appreciate a bot to be run to add WikiProject NRHP banner to many articles that have been created in the last year without banners, perhaps 1,000-4,000 in number, adding to the 20,000 articles in the wikiproject. This request reflects feedback on format of bot request from Wikipedia:Bot requests/Archive 28#Advance questions for a WikiProject NRHP banner placement bot.

I am hoping it would show results before July 4, so as to inform some press-release type announcements on that day or a day or two before, but will be glad to get any help. Thanks in advance! doncram (talk) 01:19, 29 June 2009 (UTC)

Part 1. Request that all articles in wikipedia whose titles begin with "National Register of Historic Places listings in" be tagged with this banner, {{WikiProject National Register of Historic Places|class=list}}, retaining existing assessments if any. If the article is a redirect, do not add any banner.

Part 2. Request that all articles in the following NRHP categories be tagged with this banner, {{WikiProject National Register of Historic Places}}, retaining existing assessments if any. If the article is a redirect, do not add any banner.

Extended content

Doing... Anomie 03:56, 29 June 2009 (UTC)
Thanks, and it looks basically good. But I notice for List of Registered Historic Places in Wexford County, Michigan and others, it is adding banners for redirects, contrary to the bot request. I am not sure if / how much this is a problem, as I was unsure beforehand about whether redirects should be tagged or not, so I don't expect it is worth going back to undo all of those. But I did try to ask for the bot request to exclude them. :) doncram (talk) 14:22, 29 June 2009 (UTC)
In fact, since I think the bot is halfway through the state categories lists, I think it is best for it to continue as it is, tagging the redirects that have NRHP categories. To be considered by the wikiproject as a group, not a bad thing. doncram (talk) 14:57, 29 June 2009 (UTC)
Doh! I set the bot up to skip redirects, and then I changed it around and forgot to re-set it. Anomie 17:04, 29 June 2009 (UTC)
It looks done. Total for wikiproject went from 20,400 at start to 23,372 now, adding about 700 list-articles among a total of 3,000 added. Plus about 100 redirects added (not counted as wikiproject articles). Others seem to be busy categorizing the new, unassessed ones. Thanks so very much! doncram (talk) 23:41, 30 June 2009 (UTC)

db-author and db-self bot

Is there any interest in a bot that would handle non-controversial db-author and db-self chores?

What I had in mind:

  • Check if requester is blocked, if yes, do not delete
  • Check if the page has had any kind of protection on it recently, if yes, do not delete.
  • For db-author, check to see if page has any edits by other editors, if yes, do not delete.
  • For db-user, check to see if page is in the correct user: space. If not, do not delete.
  • For both, check move history to make sure it has never been moved. If moved, do not delete.
  • Delete if okay to delete.
  • If deleted, notify the requester and give the requester instructions to shut off the bot and request restoration if there was a problem.
  • If not deleted, add a template indicating bot refused to delete. Administrators can watch the category that the template adds.
  • Log the results.
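The checklist above amounts to a pure predicate over facts the bot can query; as a sketch (all field names are invented for illustration, not any real bot's data model):

```python
def may_bot_delete(req):
    """Apply the safety checks listed above; any failure means the page
    is left for a human administrator."""
    if req['requester_blocked']:
        return False               # blocked requester
    if req['recently_protected']:
        return False               # recent protection implies controversy
    if req['tag'] == 'db-author' and req['has_other_editors']:
        return False               # someone else contributed
    if req['tag'] == 'db-user' and not req['in_requesters_userspace']:
        return False               # wrong user: space
    if req['ever_moved']:
        return False               # any move history disqualifies
    return True
```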

The primary issue is that this would be an admin bot. It would only be useful if the benefits of the bot outweighed the risk of a snafu and that this wasn't seen as "admin bot creep," which is always something to watch out for. Good code review should make snafus almost unheard of. davidwr/(talk)/(contribs)/(e-mail) 00:38, 30 June 2009 (UTC)

I'd be willing to devote a little bit of my time to writing this. Coding... (X! · talk)  · @092  ·  01:12, 30 June 2009 (UTC)
Good. Now all we need is consensus that this is a good idea. davidwr/(talk)/(contribs)/(e-mail) 14:28, 30 June 2009 (UTC)
Just a quick Q - is {{db-author}} ever used when no one else has edited? Normally it is added after a third party has edited the article to point out it doesn't meet our guidelines (by adding a notability template/speedy tag) and the original author realizes their error. --ThaddeusB (talk) 19:15, 30 June 2009 (UTC)
User:SDPatrolBot currently tags pages which creators blank after they have been tagged for deletion - Kingpin13 (talk) 19:19, 30 June 2009 (UTC)
Db-author is sometimes used where db-user would also be appropriate. I would assume it's also used if an author changes his mind on his own or after being notified off-page, such as on a talk page or by email. However, your point is a good one, many db-authors will be bypassed by the proposed bot. davidwr/(talk)/(contribs)/(e-mail) 14:48, 1 July 2009 (UTC)
Yes, I agree that g7 is nearly always added by another user (a point I was trying to illustrate with my bot). But I think this could be useful in cases where a user recreates a page when tagging it for deletion (this normally happens with Twinkle), and then they are the only editor, and they quickly change to g7. - Kingpin13 (talk) 14:55, 1 July 2009 (UTC)
  • Please don't automate this. The load of {{db-author}} requests isn't very high and isn't overwhelming C:CSD, and there are often reasons to keep a page around (say, somebody else links to it or transcludes the page). Better to leave this to humans. Kusma (talk) 15:25, 1 July 2009 (UTC)
    • I agree with Kusma. The actual deletion shouldn't be automated because there are so many things that need to be checked (although automated tagging of author-blanked pages would be good). –Drilnoth (T • C • L) 16:05, 1 July 2009 (UTC)
      That's what User:SDPatrolBot does, so no need for another bot (although I don't object to there being one). Also, I think the things which would be checked are pretty good, and I think the bot would probably make 0.5% mistakes, and not make the same one twice (so long as the code was updated), so we could get it pretty good. But I don't think there is a huge need for this bot (although again I really don't mind if there is), as admins keep this area well under control - Kingpin13 (talk) 16:15, 1 July 2009 (UTC)
    The bot should skip if the page is being transcluded by others. –xenotalk 16:18, 1 July 2009 (UTC)
    Good point, similar to that, if it's a category, you may want it to check if there are any pages in it - Kingpin13 (talk) 16:22, 1 July 2009 (UTC)
  • Once more, this bot is not needed. There is no problem that it solves that is worth even an (optimistic) error rate of 0.5%. In particular, the bot won't be able to check whether the incoming links indicate that the page should be kept despite the author's wishes. WP:CSD#G7 is done as a courtesy, having their page deleted is not an author's right. Kusma (talk) 16:49, 1 July 2009 (UTC)

CAS and EC Nos

Most chemicals in Wikipedia have a CAS No. listed, but most do not have the corresponding EC No.

In the template:chembox, the field to fill in is "EINECS".

By using a CAS No. in the following url, e.g. http://ecb.jrc.ec.europa.eu/esis/index.php?GENRE=CASNO&ENTREE=124-38-9 where the CAS No. is 124-38-9, you can identify the corresponding EC No. and then insert it into the Wikipedia article.

Any takers? —Preceding unsigned comment added by 86.166.233.138 (talk • contribs)

This is less simple than it looks. A) Is the CAS No. that is listed correct? (We are working on that, but the majority of the 9000 boxes are not checked.) B) Is the EINECS listed correct against the CAS No.? I would not suggest doing this automatically, but manually, using a carefully selected database. --Dirk Beetstra T C 19:30, 2 July 2009 (UTC)
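One cheap, offline safeguard for point A is the CAS check digit: the digits (excluding the final check digit) are weighted 1, 2, 3, ... from the right, and the weighted sum mod 10 must equal the check digit. A sketch:

```python
def valid_cas(cas_no):
    """Validate a CAS registry number's check digit, e.g. '124-38-9':
    weight the digits 1, 2, 3, ... from the right (excluding the check
    digit) and compare the sum mod 10 against the check digit."""
    body, check = cas_no.rsplit('-', 1)
    digits = body.replace('-', '')
    total = sum(int(d) * w
                for w, d in enumerate(reversed(digits), start=1))
    return total % 10 == int(check)
```

This only catches transcription errors, of course; it says nothing about whether the number belongs to the right chemical.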
And it would be much better to get the whole list of numbers in one file for the bot to process, rather than having a bot submit 9000 queries into their website. Anomie 20:01, 2 July 2009 (UTC)
If this is done in some way (Anomie's way is thé way forward), please do this together with Wikipedia:WikiProject_Chemicals/Chembox_validation. They have access to a lot of verified data. --Dirk Beetstra T C 08:20, 3 July 2009 (UTC)

Prep element infoboxes for update of master template

Bot request no longer needed.

If anybody is interested, please complete this bot request. Add the below parameters to each element infobox, which are all in Category:Periodic table infobox templates. |namecap= |number3=

At the same time, create some values. Values for namecap will be capitalized versions of what is already in the name field. For template:Infobox copper, the name value is copper, so the corresponding namecap value should be Copper.

Values for number3 should be the 3 digit version of what is already in the number field. For copper, the number value is already 29 and the number3 value should therefore be 029. Of course, any element that already has a three digit value in the number field should just have that value copied over to number3. Like so: |namecap=Copper |number3=029

Also, please replace spaces between words in the values given for crystal structure with plain old ASCII hyphens (-). For example, cubic face centered becomes cubic-face-centered. You will find oddball values - that's fine, we know that there will be manual clean-up and will do that ourselves.

Addendum: Put .jpg at the end of each value given in the image name field. Note that not all fields have values. Once the bot run is over, replace the text at Template:Elementbox with the text at User:Mav/Sandbox/Elementbox. Project members will manually fix any oddities.
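Taken together, the request boils down to a few string transforms per infobox. A pure-function sketch over the parameter map (names as in the request; note that parts of the request were later struck in the discussion, so treat this as illustrating the original ask):

```python
def prep_element_params(params):
    """Derive the new/normalized fields described above from an element
    infobox's existing parameter values; fields absent or empty in the
    source are left alone."""
    out = dict(params)
    name = params.get('name', '')
    out['namecap'] = name[:1].upper() + name[1:]       # copper -> Copper
    out['number3'] = params.get('number', '').zfill(3) # 29 -> 029
    if params.get('crystal structure'):
        out['crystal structure'] = params['crystal structure'].replace(' ', '-')
    if params.get('image name') and not params['image name'].endswith('.jpg'):
        out['image name'] = params['image name'] + '.jpg'
    return out
```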

All of the above per discussion on WikiProject Elements' talk page. Thanks for any help you can provide! :) --mav (talk) 01:24, 29 June 2009 (UTC)

While you are at it, please also add |image ext=.jpg to each infobox. Thanks. --mav (talk) 01:42, 29 June 2009 (UTC)

Is there any particular reason not to use {{ucfirst:{{{name}}}}} and {{padleft:{{{number}}}|3|0}} as needed in the template code instead of creating the two extra parameters? And any reason (besides breakage for a few minutes while the bot runs) the .jpg can't just be included in the image name? Anomie 02:25, 29 June 2009 (UTC)
I was not aware of those templates. I'll test them. Avoidance of breakage was the intent of having a separate param for the extension. But having a single param is cleaner. Let's put this request on hold for now. --mav (talk) 03:13, 29 June 2009 (UTC)
Test successful. Thank you so much for the pointers! :) Addendum and strikeouts above. --mav (talk) 03:42, 29 June 2009 (UTC)
I had to strike the space to hyphen part too. Is there a way to accomplish the same thing w/o changing the values? The purpose is to create image links to images of the crystal structures (all such images have hyphens instead of spaces). --mav (talk)
No, the devs are strongly opposed to giving us useful string manipulation functions for some reason. About all you could do is a #switch to translate the "normal" values into image names. Or go back to the idea of having the bot change them. Anomie 04:02, 29 June 2009 (UTC)
The number of cases is small, so I think #switch will work nicely as project members manually fix the broken infobox parts. Thanks again! Way past time for bed - I'd like to create some cases before the bot does its thing, so if possible please hold this request until then. You have been most helpful. :) --mav (talk) 04:30, 29 June 2009 (UTC)

For what it's worth, most elements don't use Template:Elementbox. It's a tedious conversion process and it's been on my to-do list for ... awhile. --MZMcBride (talk) 04:54, 29 June 2009 (UTC)

There probably should be a bot run just to prevent all the hacks that are being suggested. Anyway a hack for images lacking .jpg extension is {{{image name}}}{{#ifexist:{{{image name|}}}.jpg|.jpg}}. — Dispenser 07:43, 29 June 2009 (UTC)
#ifexist doesn't work for images on Commons; I believe it checks for the existence of the file description page and not the file itself, and Commons images of course have no local description page. What are the hacks you're complaining of? Surely not the use of core functions like ucfirst or padleft for the exact purpose they are intended? Anomie 12:12, 29 June 2009 (UTC)
#ifexist does work with external repos (like Commons) if you use Media: as I recall ({{#ifexist:Media:Foo.jpg|...}}). I'm not really sure I see a need for bots anywhere here. It's like thirty pages that use {{Elementbox}}.... --MZMcBride (talk) 13:38, 29 June 2009 (UTC)
"Media:" works? Nifty, I'll have to remember that. Anomie 17:02, 29 June 2009 (UTC)

Hm. Seems like the remaining conversion issues could just as easily be done with a tabbed browser due to the fact that only 30 elements use the elementbox template. Please cancel the bot request and accept my gratitude for all the great advice and pointers to improve the template. :) --mav (talk) 00:15, 30 June 2009 (UTC)

43 mainspace pages transclude {{Elementbox}}, 44 templates. Anomie 01:36, 30 June 2009 (UTC)
All fixed manually. I think all is well now - thanks again! --mav (talk) 03:40, 30 June 2009 (UTC)

bio-photo-bot

As part of a larger project to address the list of requests for photographs I would like assistance creating a bot script to sort out photo requests of people. I have no experience of Python or similar script languages. I started the task using AWB but it became clear it would take me a couple of years to complete it with that method. See User:Traveler100/bio-photo-bot for details. Traveler100 (talk) 09:06, 28 June 2009 (UTC)

OK, no one interested; could someone point me in the direction of some Python source code examples of replacing text in articles, so I can see if I can work this out myself. Traveler100 (talk) 13:49, 4 July 2009 (UTC)
That's not a good idea, you shouldn't be removing the tags from the WikiProject banners; they have these for a reason. You should simply add the banner later on in the page if need be. Although a bot to go through those lists and remove any that have images would be a good idea, because last time I had a look all the ones I clicked on had images. Peachey88 (Talk Page · Contribs) 14:17, 4 July 2009 (UTC)
The tag in the WikiProject banner is no longer of any use; there are thousands of requests in each of the current titles and they are mixed up with articles from many other projects. The reason for removing the tag, and not just adding a new one, is to track how complete the process is. Traveler100 (talk) 15:01, 4 July 2009 (UTC)
I don't believe a bot to remove ones that have pictures is a good idea. A bot cannot tell what a picture is of so it cannot determine if a given picture satisfies the request. For example, a biographical article may have a picture of something else, but be missing a picture of the person. I suppose it could be limited to pictures in infoboxes, but even that I think would be prone to false positives. Generating a list for human review is a better option. -- JLaTondre (talk) 14:31, 4 July 2009 (UTC)
It turns out there is already a bot that checks for this. PhotoCatBot adds articles to Category:Articles which may no longer need images if they have images. -- JLaTondre (talk) 14:44, 4 July 2009 (UTC)
There is a tool to check if articles in these categories have images. You have to go through manually, which I have been doing. All 10,000 articles in the people-needing-images category are valid requests. Traveler100 (talk) 14:59, 4 July 2009 (UTC)
Given PhotoCatBot is already classifying image requests (in addition to listing ones with pictures), you may wish to directly ask Twp if he wants to take this on. -- JLaTondre (talk) 15:10, 4 July 2009 (UTC)

Redirect a redirect

The previous request was archived without a response. Would it be possible to modify all links that point to United States Census, 2007 and make them point to United States Census Bureau instead? I'd like to delete the 2007 redirect page. Timneu22 (talk) 17:45, 27 June 2009 (UTC)

  • There are around 50 links that would need to be updated; it's a little tedious I know, but that's probably less than an hour's plain-old editing. Is there a reason it can't be done manually? - TB (talk) 10:40, 6 July 2009 (UTC)
    Sounds like a good task for AWB, you could ask on their request page if you like (just post at Wikipedia:AutoWikiBrowser/Tasks). Just need to fill a list from "links to article", then find and replace "[[United States Census, 2007" with "[[United States Census Bureau". Very easy :), I'd do it myself except I'm currently cutting down on my automated edits, I dislike AWB and I'm programming my own bot at the mo (nothing more enjoyable than that) :) - Kingpin13 (talk) 10:59, 6 July 2009 (UTC)
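For illustration, the same find-and-replace could be done outside AWB too. A minimal Python sketch over raw wikitext, slightly more careful than the plain substring replace suggested above, so that bare links keep their displayed text:

```python
import re

def retarget_links(wikitext):
    """Point links at United States Census Bureau instead of the 2007 redirect."""
    # Piped link: swap only the target, keep the display text as-is.
    wikitext = re.sub(
        r"\[\[United States Census, 2007\|",
        "[[United States Census Bureau|",
        wikitext,
    )
    # Bare link: retarget, preserving the original wording as the display text
    # so the rendered article reads exactly as before.
    wikitext = re.sub(
        r"\[\[United States Census, 2007\]\]",
        "[[United States Census Bureau|United States Census, 2007]]",
        wikitext,
    )
    return wikitext
```

The piped pattern must run first; otherwise the bare-link pattern could never accidentally apply to a piped link anyway, but keeping the two cases separate makes the intent obvious.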
    I'll post on AWB. Thanks. Timneu22 (talk) 14:23, 7 July 2009 (UTC)

Bot to judge suitability of potential admins

As outlined at WT:RfA#WT:RfA.

This bot would take a user and look through the things which are considered in an RfA (so for example: What areas they take part in, block log, how much interaction they have with other users, how many AN/I threads there have been about them ;), what (if any) warnings they have received, etc. etc.) and then give the user a "score" of how good an admin they would make, it could also find the average score of failed/successful noms over the past month(s) and compare the user to that.

The way to do this which seems best (as we don't want the results to be public), is to have a page where users can request to be reviewed (optionally) and the bot will then email their results to the user via Special:EmailUser.

There is also talk about having a bot "fail" users before they go for RfA. It seems to be a bad idea to add this immediately, but it would be nice to have the bot save data about whether it would have failed them, if it was allowed to. - Kingpin13 (talk) 08:37, 17 June 2009 (UTC)

I guess the hardest part of this would be to work out all the metrics, if you intended to have an overall conclusion. - Jarry1250 (t, c) 09:50, 17 June 2009 (UTC)
Aye. But this should be possible through trial and improvement. By comparing results of past candidates. - Kingpin13 (talk) 10:16, 17 June 2009 (UTC)
What a horrible idea! Having sacks of bolts pass judgement on human behaviour and hand out scores? Who in the world would have access to the results if they were not public? The small bunch of people who hang out at RfA? If the data is available upon request, who decides who has access? If it's no one, then the data is public, despite you saying otherwise. By querying the bot? Then what's to stop people from using it on non-RfA candidates? I mean, there are problems with the RfA process and the RfA folks, but this is something else. If you're getting too lazy to go through {{usercheck-full}} (which compiles everything you could possibly know about the user, with links to searches at ANI and all these other alphabet-soup locations) or a variant of it yourself, don't !vote at RfA! Headbomb {ταλκκοντριβς – WP Physics} 13:38, 17 June 2009 (UTC)
It's for the use of potential candidates, to gain confidence (or not go for RfA and avoid the disappointment of failure). The results are emailed to the user who requests them, so to find out someone else's score you would have to hack into their account or email. You seem to think it's designed for !voters; that's not the idea. - Kingpin13 (talk) 13:44, 17 June 2009 (UTC)
Ah, well if that's the case, then disregard my previous remarks. Although I would worry about giving them false hopes of success. Headbomb {ταλκκοντριβς – WP Physics} 14:13, 17 June 2009 (UTC)
  • Comment As I stated at WT:RFA, I feel this is an excellent idea. It's not going to be a simple thing to code, though, as there are a lot of factors to figure out and weight. Enigmamsg 16:54, 17 June 2009 (UTC)
  • As a completely opt-in private assessment bot, there really is nothing to object to. However, getting a reasonably accurate score would be quite a programming task. More specifically, while a number of disqualifiers can be identified (insufficient edit history, recent blocks, lack of edits in certain areas, etc.), it will struggle in accurately assessing candidates that don't have any obvious problems. The reason is that assuming someone meets all the unofficial criteria, they will pass or fail based on subjective (and realistically incomplete) information gathered from their edit history. That is, the pass or fail will depend on what is unearthed about their past and how that information is viewed. --ThaddeusB (talk) 21:00, 17 June 2009 (UTC)
  • I think I had a hand in starting this idea off; there could be some truly terrible results from this, or something really useful, depending on how it was set up and run. At one extreme, a black-boxed set of software that predicted whether or not someone would pass RFA based on the last 12 months' RFAs would soon prompt questions such as What was your score on RFAbot? and of course would risk all sorts of positive feedback loops as it came to influence who passed RFA. On the other hand, an RFA bot that judged how a candidate fared against a set of criteria that each RFA voter could tweak would be useful, but it would have to be parameter-driven, e.g. a hypothetical set of criteria, variables marked in italics:
  1. clean block log for at least 12 months
  2. At least 300 edits last month and in at least two other months in the last 5
  3. No warnings in the last three months and show me any in the last 6 months
  4. No civility or npa warnings in the last 18 months.
  5. At least 3,000 edits disregarding 90% of minor or automated edits.

ϢereSpielChequers 18:26, 25 June 2009 (UTC)
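The parameter-driven criteria above could be sketched as a simple pass/fail filter. Everything here is illustrative: the per-candidate fields are hypothetical stand-ins for data a bot would fetch from the API, and the thresholds merely mirror the numbered list (with the two warning criteria collapsed into one), not an actual scoring model.

```python
from dataclasses import dataclass, field

@dataclass
class Candidate:
    # Hypothetical stand-ins for data pulled from the MediaWiki API.
    months_since_last_block: int
    edits_by_month: list = field(default_factory=list)  # most recent month first
    months_since_last_warning: int = 99
    manual_edits: int = 0
    minor_or_automated_edits: int = 0

def failed_criteria(c, discount=0.9, min_edits=3000):
    """Return the (tweakable) criteria from the list above that the candidate fails."""
    failures = []
    if c.months_since_last_block < 12:
        failures.append("block log entry within 12 months")
    recent = c.edits_by_month[:5]
    busy_months = sum(1 for n in recent if n >= 300)
    if not recent or recent[0] < 300 or busy_months < 3:
        failures.append("too few 300-edit months recently")
    if c.months_since_last_warning < 3:
        failures.append("warning within 3 months")
    # Criterion 5: count manual edits in full, minor/automated at 10%.
    effective = c.manual_edits + (1 - discount) * c.minor_or_automated_edits
    if effective < min_edits:
        failures.append("under %d weighted edits" % min_edits)
    return failures
```

Because each voter could pass their own thresholds in, the same candidate data could be re-scored against many personal criteria sets cheaply.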


It should be noted that this proposal is just to determine how accurate a bot is compared to a bureaucrat in "closing" an RfA. The data would be compiled over the course of 6-12 months, and then reviewed. Whatever the outcome, the data would be useful and would also either shoot down the idea of a bot doing it or support it. ···日本穣? · Talk to Nihonjoe 23:35, 22 June 2009 (UTC)

Comment This sounds like people want a statistical analysis done of past RFA decisions, as in a logit regression analysis, or a neural net prediction model. Creating such a prediction model would be an easy analysis for anyone knowledgeable who has a statistical software package, if the candidate variables were collected in a table for a bunch of past RFA decisions, but it is not the same as running a bot! doncram (talk) 01:25, 29 June 2009 (UTC)

90% sounds like too much for minor edits, about right for automated, but I think minor edits should be about 75% - Kingpin13 (talk) 13:06, 1 July 2009 (UTC)
I think that this is an excellent idea to have a scoring system for Rfa. But I don't like the idea of E-Mailing the results, the results should be public, everything on Wikipedia is public including the discussions we are having right now! OpinionPerson (talk) 01:09, 6 July 2009 (UTC)
But do you want to have users getting opposed at RfA because, say, the bot made a mistake? This is likely to happen. I doubt someone will oppose on the bot alone, but it's likely to contribute a lot. - Kingpin13 (talk) 07:33, 6 July 2009 (UTC)
Comment Yeah, good point. But that's why bots are tested and then fully approved. If the bot is programmed correctly by an experienced programmer, it is unlikely that it will make mistakes. Further, it would also only handle statistics, which are not a complicated type of data to work with; this minimizes the number of mistakes that could be made by the bot. The bot should be programmed in a well-known language (such as PHP) and the source code should be made available to everybody, so that the code can be peer-reviewed by experienced programmers to make sure it does not contain any major mistakes. I am a PHP programmer myself and plan to become a Zend Certified Engineer at some point this year. OpinionPerson (talk) 17:05, 6 July 2009 (UTC)
I think this might be better as a Toolserver tool. Anyone would be capable of querying it to see the results, but they wouldn't be publicly displayed. That way, interested users could use the tool, as well as the candidate themselves, but it wouldn't be able to play a major role in RfAs. Having this as an email-only bot seems rather useless, and one that displays the results to everyone could be harmful (as outlined above). The Earwig (Talk | Contribs) 17:10, 6 July 2009 (UTC)
Agree, having this as a toolserver tool seems more appropriate. OpinionPerson (talk) 16:51, 7 July 2009 (UTC)
Disagree You agree that having the results public is harmful (something I agree on), yet you still want to make them public... I don't think it's going to be obvious that the bot/tool affects people's !votes, but I do think it would. Bots should only save people work up to the point where it's not harmful - Kingpin13 (talk) 17:00, 7 July 2009 (UTC)
The point I was getting at was that a Toolserver tool is less noticeable to the average RfA !voter than a bot, but you're definitely right. This bot/tool/whatever will end up playing a large role in RfAs as long as anyone has access to it. How about we restrict the tool so that it can only be used by the candidate, but not by anyone else? (I'm thinking of Magnus's TUSC system.) It would be entirely voluntary, but may be a useful guiding force for curious wannabe-admins. I object to the idea in general, but remember: it will only be analyzing already-available data. The Earwig (Talk | Contribs) 17:23, 7 July 2009 (UTC)
I don't have a problem with it being a tool if only the candidate views it. Yes, it will only be analyzing already-available data, but it will be handing out "scores", so in the beginning stage of this at least, it would be best to keep this away from public eyes. - Kingpin13 (talk) 17:30, 7 July 2009 (UTC)
Agree - Good points. Then let's just leave it as a Toolserver tool, with only the candidate having access to viewing his/her own score. OpinionPerson (talk) 19:46, 7 July 2009 (UTC)

Redlinkbot

Hi there, I am from the RuneScape wiki and I was wondering if you could make a bot for me. I would like it to have the name Redlinkbot, and I want it to automatically find redlinks and remove them when needed; if not needed, just remove the [[ ]] tags so it becomes a normal word instead of a link.

I would really appreciate if you could do it,

82.45.113.89 (talk) 15:04, 5 July 2009 (UTC)

I'm sorry, but this page is just for requesting bots on the English Wikipedia, not for other wikis. You may want to see if Wikia has a general bot requests page instead. –Drilnoth (T • C • L) 15:08, 5 July 2009 (UTC)
For the reason above, I have to Decline this. However, I did do a quick search on the site, and found this page. Is that what you are looking for? There appears to be a forum where you can discuss bot concepts here. Interested Wikipedians might like to note that RSWiki's bot page links to WP:MKBOT locally, which may be a bad idea. Regards, The Earwig (Talk | Contribs) 21:40, 7 July 2009 (UTC)

Renaming of parameter in template

In the template {{Infobox UK property}}, I have renamed the parameter |imgage_name= as |image_name=. I'd like to request that a bot correct the name of the parameter in all articles where the template has been transcluded. Thanks. — Cheers, JackLee talk 15:48, 8 July 2009 (UTC)

Any idea how many there are? From this, it looks like only a few dozen. WP:AWB might do the trick. Wknight94 talk 19:34, 8 July 2009 (UTC)
The number doesn't look huge, but I'm not very familiar with WP:AWB. If you think that is a better place to go with my suggestion, I'll check it out. — Cheers, JackLee talk 05:24, 9 July 2009 (UTC)
I left a message on the AWB notice board and a user there is helping me with this request. Thanks for your advice. — Cheers, JackLee talk 05:43, 9 July 2009 (UTC)

Over the years, a lot of links have been added to baseball-related articles that link to the Baseball Reference site. That site recently changed the directory structure for all its player articles, adding a new directory level. What was previously "http://www.baseball-reference.com/s/smithjo01.shtml" is now "http://www.baseball-reference.com/players/s/smithjo01.shtml". The old links still currently work as redirects, but there's no guarantee that will always be the case. Therefore, I'd like to request that someone create a bot to fix these links across Wikipedia. -Dewelar (talk) 14:38, 8 July 2009 (UTC)
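Should the old URLs ever actually break, the rewrite itself is mechanical. A minimal Python sketch, assuming player URLs all follow the one-letter-directory shape shown above (this is an illustration, not any existing bot's code):

```python
import re

# Old: http://www.baseball-reference.com/s/smithjo01.shtml
# New: http://www.baseball-reference.com/players/s/smithjo01.shtml
PLAYER_URL = re.compile(
    r"(https?://www\.baseball-reference\.com/)([a-z]/[a-z]+\d+\.shtml)"
)

def update_url(url):
    """Insert the new "players/" directory level, skipping already-updated URLs."""
    if "/players/" in url:
        return url
    return PLAYER_URL.sub(r"\1players/\2", url)
```

The idempotence check matters for a bot run of this size: re-running over pages already fixed must not double the directory level.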

Declined Per WP:NOTBROKEN, links shouldn't be updated unless they are actually broken. If at some future point they actually break, feel free to leave a message at User_talk:DeadLinkBOT and I will be glad to have my bot fix them. --ThaddeusB (talk) 16:13, 8 July 2009 (UTC)
Though I don't have an issue with not changing the links at this time, I would like to note that the Wikipedia:Redirect page defines a redirect as a redirecting Wikipedia page, and so the guideline regarding changing redirects does not apply in this case. If this policy were to be extended to include external sites, then there would be no effective way to take advantage of transition periods offered by the external sites who wished to change their URL structure. Isaac Lin (talk) 16:41, 8 July 2009 (UTC)
Exactly. Your reason for declining this is not supported by the policy you cite. -Dewelar (talk) 16:55, 8 July 2009 (UTC)
I think you are both missing the spirit of the law and listening too closely to the letter of it. -Djsasso (talk) 18:46, 8 July 2009 (UTC)
So the answer is that it's better to wait until they are broken, even though we have foreknowledge that they are likely to be broken at some point? That doesn't seem like a good spirit for this law to have. -Dewelar (talk) 18:53, 8 July 2009 (UTC)
I agree. How many links are we talking about? -GTBacchus(talk) 18:57, 8 July 2009 (UTC)
I did some searching, and a very rough estimate would be 10000 pages. That is a lot of edits for something that isn't even broken. --ThaddeusB (talk) 19:07, 8 July 2009 (UTC)
Yeah, that's more than I'd do by hand. -GTBacchus(talk) 15:59, 9 July 2009 (UTC)
I struck the word declined, but this remains a bad idea. Yes, external links aren't mentioned specifically in NOTBROKEN (it is part of a document on correctly handling internal redirects, after all), but the principle is exactly the same: it is a waste of resources to fix something that isn't broken.
More to the point, we have no reason to believe these external redirects will actually break at some point. If we knew they were going to break, then sure, we could fix the problem before it happened. However, it is just as likely that the redirects will work forever as it is that they will break sometime soon. --ThaddeusB (talk) 19:01, 8 July 2009 (UTC)
I'm not sure about that, but I can understand why some might consider it a "waste of resources". It was a request, which I'd do myself if I knew anything about bots. If it's that much trouble, then forget it. -Dewelar (talk) 19:30, 8 July 2009 (UTC)
Incidentally, we do have some proof that these links may very well disappear. They changed the link structure of their minor league player pages as well, and in that case none of the old links have been kept. I wouldn't be so quick to assume these are any more likely to survive. -Dewelar (talk) 20:30, 8 July 2009 (UTC)

(outdent) I'm not sure why I'm missing the spirit of the law, as I said I didn't have an issue with not changing the links at this time. I have the same view as ThaddeusB on waiting to see if there is a known plan for the external redirects before engaging in an automated effort to change the links. Isaac Lin (talk) 19:29, 8 July 2009 (UTC)

How about a bot to change the relevant baseball-ref links to templates? Many of them already are but I gather 10,000 places are not, but should be. Something like {{Baseballstats}}... Wknight94 talk 19:32, 8 July 2009 (UTC)
Ahh yes, I had suggested that as well, and now that you point out that particular template I see it is possible to do with Baseball Reference; I couldn't wrap my head around doing it without them using numerical IDs like the main hockey reference site does. -Djsasso (talk) 19:35, 8 July 2009 (UTC)
If that were the type of link that needs changing, you would be correct. I'm talking about links used as references within the articles, not the summary links used in the External links section. And yes, they probably number in the tens of thousands, as ThaddeusB said. -Dewelar (talk) 20:27, 8 July 2009 (UTC)
I was referring to those links as well, ie just rip the code out from one and place it in a new template. -Djsasso (talk) 20:28, 8 July 2009 (UTC)
I have no idea how to do that sort of thing, but if it would work that would be a perfectly fine alternative. -Dewelar (talk) 20:32, 8 July 2009 (UTC)
You are correct, I misread your comments. -Djsasso (talk) 19:38, 8 July 2009 (UTC)

PROD-removal notify bot

Hi there. Probably a task for an existing bot but here goes: I requested Special:AbuseFilter/200 to be created which tags edits where a prod template is removed (and not replaced by an AFD or speedy template). I think it might be a good idea to have a bot that notifies the user who placed the prod template (as opt-in or opt-out, I'm not sure here) so they can decide whether they want to pursue deletion through AFD. What do you think? Regards SoWhy 21:25, 10 July 2009 (UTC)

This sounds really good, and I think there is already an outstanding BRfA. NW (Talk) 00:44, 11 July 2009 (UTC)
That proves two things: a.) I have good ideas and b.) other people have them before I can articulate them. Thanks for the link :-) Regards SoWhy 06:42, 11 July 2009 (UTC)
Aye :). Anybody's input on if it should be opt-in or -out would be welcome. Although you (SoWhy) sound hazy on that. - Kingpin13 (talk) 07:24, 11 July 2009 (UTC)

Would it be possible for a bot to be created that would replace all instances of the "home" and "away" parameters in this template with "team1" and "team2"? The "homescore" and "awayscore" parameters would also need to be changed to "team1score" and "team2score". The reason for this change is that, when a match is played at a neutral venue, there is no "home team" or "away team", and this would make the template fall in line with {{footballbox}} and the cricket match summary templates. – PeeJay 11:39, 9 July 2009 (UTC)

This could better be accomplished by editing the template to allow for either "home" or "team1", either "away" or "team2", etc. --ThaddeusB (talk) 14:17, 9 July 2009 (UTC)
That is possible, and has been implemented (I hope correctly), but an eventual scenario in which "home" and "away" could be phased out would be preferable. – PeeJay 14:21, 9 July 2009 (UTC)
So any chance of this happening then? – PeeJay 20:07, 10 July 2009 (UTC)
It is generally considered a waste of resources to change something that isn't broken, so no the change probably won't happen unless it is being done as part of some larger task that would require the editing of all the affected pages anyway. --ThaddeusB (talk) 14:00, 11 July 2009 (UTC)

Message for user talk pages of Maryland and DC Wikipedians (fairly urgent)

Would a bot owner post the following message to the user talk pages of all editors who are in one of these two categories?

Thanks! -- John Broughton (♫♫) 21:29, 9 July 2009 (UTC)


Section heading for announcement:

Volunteer opportunity in Bethesda, Thursday, July 16


Body of section (announcement):

The Wikimedia Foundation will be conducting an all-day Academy at the National Institutes of Health, in Bethesda, Maryland, on Thursday, July 16. The team that will be teaching at the Academy, a mix of paid staff and volunteers, is looking for four more volunteers to be teaching assistants, providing one-to-one assistance in workshops whenever a workshop participant has a problem following the instructional directions. (We currently have two editors signed up as teaching assistants, and are looking for a total of six.)

The NIH editing workshops are only for two hours, but volunteers are asked to meet the Wikimedia Foundation team at the hotel in Bethesda at about 7:15 a.m. (time to be finalized shortly) and to stay for the entire day, which ends at 4:30 p.m. Lunch will be provided. (The full schedule can be found here.)

The team is not necessarily looking for expert editors (though they are welcome), just people who can help novices who might get stuck when trying to do some basic things. If you've been an editor for at least 3 months, and have done at least 500 edits, you probably qualify.

If you're interested, please send John Broughton an email. If you might be interested, but would like further information, please post a note on his user talk page, so that he can respond there, and others can see what was asked.

(You have received this posting because your user page indicates that you live in Maryland or DC.)


I can do this right now. --MZMcBride (talk) 02:08, 10 July 2009 (UTC)
Done. --MZMcBride (talk) 05:08, 10 July 2009 (UTC)
Thanks!! -- John Broughton (♫♫) 11:48, 10 July 2009 (UTC)
Consider using Geonotices instead of just spamming people's talk pages. — Dispenser 17:35, 10 July 2009 (UTC)
Thank you for pointing that out. As the page says, however, Currently, only a watchlist geonotice exists. If there had been more time before the event (something totally out of my control, unfortunately), this would have been a viable alternative. (I typically check my watchlist every couple of weeks, and I'm sure that there are other editors who don't review it every day.) -- John Broughton (♫♫) 23:11, 10 July 2009 (UTC)
It still would be a wise idea to get it listed there. Not everyone treats their user pages like a MySpace page, and I certainly didn't know about the categories before this came up. 91.19.229.151 (talk) 15:08, 11 July 2009 (UTC)

Could in principle be done by going through Category:Films based on books, checking which links on the pages are categorized as books, and then adding the category to the linked page. Could also deal with the many subcategories of Category:Films based on books by making analogous reverse categories (e.g. Category:Novels made into films). --Cybercobra (talk) 06:40, 13 July 2009 (UTC)

Bot to update WP:DABS

WP:DABS was created to monitor the size of dinosaur articles on Wikipedia. The purpose is to identify new articles, identify very short stubs, highlight potential hoax articles, monitor when an article may be becoming too long, identify articles which may be nearing comparable FA or GA length, identify articles which have already reached GA or FA, and identify articles which have been categorized incorrectly.

DABS was updated manually from December 4, 2005 to May 28, 2007. It was automated by DinoSizedBot, then later Betacommand's bot, then finally 718 Bot. It hasn't worked properly since January, but we'd like to get it up and running again, as we continue to get hoaxes and other stuff that could potentially be identified if a bot was updating the list. It's also just much easier to see which articles need the most attention using the list.

We've contacted East718, the owner of the last bot to update the list, but he's busy in real life. Is this a task someone else could take up? Firsfron of Ronchester 23:58, 12 July 2009 (UTC)

It wouldn't be too hard to do, but I for one would need more details on exactly how it works (i.e. how should the bot know which articles to include/exclude, and is the previous "transclude some other page" method preferable to having the bot update that page directly?). Anomie 00:07, 13 July 2009 (UTC)
It could either be done through categories (anything inside Category:Dinosaurs, for example, excluding the TV series and movie junk), or it could be done to any page which has a Template:WikiProject Dinosaurs template on its talk page. In the past, yes, it was done using a transclusion instead of updating the page itself. That worked very well, and could certainly be done again. Firsfron of Ronchester 00:20, 13 July 2009 (UTC)
To clarify: we wouldn't want the articles in Category:Dinosaurs in fiction (and its many sub-categories), and also excluding Category:Birds (because we don't want 15,000 bird articles mixed in), but we would basically want everything else in Category:Dinosaurs. Firsfron of Ronchester 00:37, 13 July 2009 (UTC)
I can do this.  Working -- Cobi(t|c|b) 01:08, 13 July 2009 (UTC)
That's wonderful. Thank you so much, Cobi. Firsfron of Ronchester 01:11, 13 July 2009 (UTC)
 Done - I don't think it needs a BRFA as it only edits in its userspace (if I'm wrong, let me know and I'll gladly go through a BRFA for it). See User:ClueBot II/dino. You can transclude that to WP:DABS. It is updated every hour, hopefully. -- Cobi(t|c|b) 01:46, 13 July 2009 (UTC)
Wow, that was sooo quick. Thank you so much. You've saved WP:DINO hours and hours of work. It updates every hour? The old ones only updated daily. Also: some observations: it appears to identify redirects as articles (possibly because they were tagged by the WikiProject years ago); it also doesn't identify Featured Lists (there's just one, so it's not a huge problem). Will we need to delete the WikiProject tags from the redirect's talk page in order to get the bot to stop listing redirects? Thanks again for all your work on this. Firsfron of Ronchester 02:04, 13 July 2009 (UTC)
It grabs all pages with talk pages which have the WikiProject template on it. So, if the redirects' talk pages were redirected as well (they should be, I think), it would no longer list those. -- Cobi(t|c|b) 08:06, 13 July 2009 (UTC)
Thanks; we can do that. Can you have the bot recognize Featured Lists? Firsfron of Ronchester 13:36, 13 July 2009 (UTC)
As long as it's not disruptive in some manner (e.g. something that would make Domas kill you), no BRFA is needed for editing only the bot's userspace. Anomie 02:10, 13 July 2009 (UTC)
Heh - it's two POSTs (login and post content) and two GETs (check /Run and get an edit token). The rest is a 2-second MySQL query, so if anybody's going to kill me, it's River :) -- Cobi(t|c|b) 07:36, 13 July 2009 (UTC)

remove direct use of Category:Fungus stubs

This category apparently has over 1000 direct uses, many of which have a stub template for some sub-category. I think a bot should go through all pages in the category, remove all direct use of the category, and if any page doesn't have a stub template for this category or one of its subcats, add a {{fungus-stub}} tag. There are 6 legitimate tags for this category and its subcats: {{fungus-stub}}, {{Mycologist-stub}}, {{Ascomycota-stub}}, {{Dothideomycetes-stub}}, {{Basidiomycota-stub}} and {{yeast-stub}}. עוד מישהו Od Mishehu 23:09, 13 July 2009 (UTC)
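The per-page pass described here could be sketched as a pure text transformation. The template names come from the request; fetching and saving pages through a bot framework is left out, and the category-link regex is an illustrative assumption:

```python
import re

# The six legitimate stub templates named in the request.
STUB_TEMPLATES = ("fungus-stub", "Mycologist-stub", "Ascomycota-stub",
                  "Dothideomycetes-stub", "Basidiomycota-stub", "yeast-stub")

def fix_page(wikitext):
    """Drop the direct category link; add {{fungus-stub}} if no stub tag is present."""
    has_stub = any(
        re.search(r"\{\{\s*%s\s*\}\}" % re.escape(t), wikitext, re.IGNORECASE)
        for t in STUB_TEMPLATES
    )
    # Remove [[Category:Fungus stubs]] including any sort key.
    wikitext = re.sub(r"\[\[Category:Fungus stubs(\|[^\]]*)?\]\]\n?", "", wikitext)
    if not has_stub:
        wikitext = wikitext.rstrip() + "\n\n{{fungus-stub}}\n"
    return wikitext
```

The stub check runs before the removal so a page carrying only the bare category still ends up tagged.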

Wikipedia is rife with markup such as the following, [[topic]]s, which generates output where the link text is followed by meaningless part-words such as "s", "ed", "ing". I assume that such markup presents an accessibility hazard because speech-synthesizer users hear a segment of garbage output read aloud when the synthesizer encounters the word fragment after the link text. (Feedback from speech-synthesizer users, anyone?) At the very least it seems fair to suggest that this is a potential accessibility problem.

I'd like to ask for comments about the feasibility of implementing a bot that carries out a set of substitutions of the form [[topic]]suffix to [[topic|topicsuffix]], for example [[topic]]s to [[topic|topics]], and the same for ed, ing, and any others that can be found and which can be guaranteed to render the same.

Is there an existing bot that simply carries out general regular-expression-type substitutions? CecilWard (talk) 10:17, 14 July 2009 (UTC)

The correct solution is to preprocess the text for reading by the screen reader, not make the markup harder for sighted and blind editors to read. I believe there have been a few scripts developed by college classes to improve the reading of discussions, so you may want to get in contact with them. Now, where we could improve accessibility is alt text: images tend to be improperly captioned and don't have suitable alt text. Math markup is another, now that it too supports alt text. And next time, post about accessibility issues at Wikipedia:WikiProject Accessibility. — Dispenser 11:52, 14 July 2009 (UTC)
AWB, in fact, has an option to do exactly the opposite of what you propose. The whole point of [[topic]]s is that MediaWiki automagically treats it as if it were [[topic|topics]] when rendering the page, like this: "topics".
Or, if you're complaining about editing by editors using screen readers, I suspect they have bigger problems with wikitext than this. Anomie 17:17, 14 July 2009 (UTC)
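For what it's worth, the substitution CecilWard describes is a one-liner, even though (as noted above) MediaWiki already renders both forms identically. A minimal sketch, assuming only simple lowercase suffixes and skipping piped links:

```python
import re

# Expand [[topic]]suffix into [[topic|topicsuffix]], e.g. [[cat]]s -> [[cat|cats]].
# The character class excludes "|" so piped links are left untouched, and the
# trailing [a-z]+ must start immediately after the closing brackets.
LINK_TRAIL = re.compile(r"\[\[([^\[\]|]+)\]\]([a-z]+)")

def expand_trails(wikitext):
    return LINK_TRAIL.sub(r"[[\1|\1\2]]", wikitext)
```
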

Foundation and similar dates, in infoboxes

[I'm relisting this August 2008 request (including subsequent revisions), as the editor who said he would make the edits has not done so, nor replied to many enquiries as to progress (due at least in part to understandable family matters). Since there are hundreds of templates in need of this overdue change, and this is currently emitting broken microformats, the need may be considered pressing]

To add "founded", "founded date" or "foundation", "released", "first broadcast" and similar dates to an infobox's hCard microformat, existing dates need to be converted to use {{Start date}}.

I've compiled a list of relevant infoboxes at User:Pigsonthewing/to-do#Date conversions.

Thank you. Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 23:01, 17 July 2009 (UTC)

Regex needed

I've been trying to make a regex for User:Chrisbot but just can't manage; could someone help me? The regex needs to change "CONTl" to "CHRISBOT" but not any other of the CONT icons (i.e. not (t|ex|ext|u|ut|ue|utex)CONTl to (t|ex|ext|u|ut|ue|utex)CHRISBOT). I accidentally came upon this problem when changing tCONTl to CHRISBOT and unexpectedly changed utCONTl to uCHRISBOT as well (diff). I'll soon be on holiday but I'll look here asap. ChrisDHDR 20:30, 17 July 2009 (UTC)

The regex character \b is very useful in situations like this. If you ask it to change "\bCONTl" into "CHRISBOT", it will ignore situations where CONTl appears immediately after an alphanumeric character (a letter or number), but will replace it where CONTl appears after any other character, such as whitespace, [ and :. For example:
  • CONTl will turn into CHRISBOT.
  • utCONTl will stay the same.
  • [[File:CONTl]] will turn into [[File:CHRISBOT]].
  • [[File:utCONTl]] will stay the same.
  • ut CONTl will turn into ut CHRISBOT.
That should be good enough for your purpose. The Earwig (Talk | Contribs) 01:59, 18 July 2009 (UTC)
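The examples above can be checked directly in Python, whose `re` module uses the same \b semantics as most PCRE-style engines:

```python
import re

def fix(text):
    # \b matches only at a word/non-word boundary, so CONTl preceded by a
    # letter (as in utCONTl) is left alone, while string-start, whitespace,
    # "[" and ":" all count as boundaries.
    return re.sub(r"\bCONTl", "CHRISBOT", text)
```
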
Thanks, that's just what I needed. ChrisDHDR 15:13, 18 July 2009 (UTC)

Sortname template

We're looking at making lists of winners of cycling races sortable so readers can view lists by year, nationality, name or team. We only need to make small edits to enable much of this. However, names won't currently sort on last name, so we'd like to change all the standard wikilinked names into {{sortname}}.

The lists are created using templates. They're currently transcluded into 252 articles, so it would be a lot of editing work; maybe a bot could help? The names are in the name and name2 fields of {{cycling past winner rider}} and {{cycling past winner rider2}}

For just First Last names, we'd need the following changes:

  • [[First Last]] > {{sortname|First|Last}}
  • [[First Last (cyclist)|First Last]] > {{sortname|First|Last|First Last (cyclist)}}
  • [[First Lást]] > {{sortname|First|Lást| |Last, First}}

If there is First smth Last, I'm not sure how this could be done, but maybe the bot could look at the {{DEFAULTSORT}} key on the target pages?
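For the plain two-word cases in the list above, the conversion could be sketched as follows. This is only an illustration (names are hypothetical); accented names and names with middle parts would still need the {{DEFAULTSORT}} lookup just mentioned:

```python
import re

def to_sortname(wikitext):
    # [[First Last (cyclist)|First Last]] -> {{sortname|First|Last|First Last (cyclist)}}
    wikitext = re.sub(
        r"\[\[(([A-Z]\w+) ([A-Z]\w+) \([^)|]+\))\|\2 \3\]\]",
        r"{{sortname|\2|\3|\1}}",
        wikitext,
    )
    # [[First Last]] -> {{sortname|First|Last}}
    wikitext = re.sub(
        r"\[\[([A-Z]\w+) ([A-Z]\w+)\]\]",
        r"{{sortname|\1|\2}}",
        wikitext,
    )
    return wikitext
```

The disambiguated (piped) pattern runs first and requires the display text to repeat the First/Last pair, so it never fires on unrelated piped links.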

At the same time, can all instances of the deprecated fields teamnat= and teamnatvar= be removed?

List of articles to edit

Thanks in advance for your help! SeveroTC 19:26, 18 July 2009 (UTC)

Coordinates in articles using Infobox Military Structure

Articles using {{Infobox Military Structure}} which have coordinates in that infobox as latitude and longitude parameters, and a {{coord}} template, need to have the latter removed, as it causes an overlay of coordinates, as seen here. Thank you. Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 10:25, 19 July 2009 (UTC)

remove direct use of Category:Fungus stubs

This category apparently has over 1000 direct uses, many of which have a stub template for some sub-category. I think a bot should go through all pages in the category, remove all direct use of the category, and if any page doesn't have a stub template for this category or one of its subcats - add a {{fungus-stub}} tag. There are 6 legitimate tags for this category+its subcats - {{fungus-stub}}, {{Mycologist-stub}}, {{Ascomycota-stub}}, {{Dothideomycetes-stub}}, {{Basidiomycota-stub}} and {{yeast-stub}}. עוד מישהו Od Mishehu 07:51, 21 July 2009 (UTC)
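The per-page logic could be sketched roughly like this (Python; the template and category names come from the request above, everything else is illustrative):

```python
import re

STUB_TEMPLATES = ["fungus-stub", "Mycologist-stub", "Ascomycota-stub",
                  "Dothideomycetes-stub", "Basidiomycota-stub", "yeast-stub"]

def fix_page(wikitext):
    # Remove any direct [[Category:Fungus stubs]] link (with or without a sort key).
    text = re.sub(r"\[\[Category:Fungus stubs(?:\|[^\]]*)?\]\]\n?", "", wikitext)
    # If none of the legitimate stub templates remains, fall back to the generic tag.
    if not any(re.search(r"\{\{%s[|}]" % re.escape(t), text, re.IGNORECASE)
               for t in STUB_TEMPLATES):
        text = text.rstrip() + "\n\n{{fungus-stub}}\n"
    return text
```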

I was actually considering doing this. I will submit a bot request after some discussion with the WikiProject to make sure I'm not missing anything. J Milburn (talk) 20:31, 21 July 2009 (UTC)
Requested. See Wikipedia:Bots/Requests for approval/J Milburn Bot 3. J Milburn (talk) 20:49, 21 July 2009 (UTC)

WikiProject Software

Distribute the following message to all members of WikiProject Software and all its departments:

For better and faster discussion between WikiProject Software members, an IRC channel has been created: irc://irc.freenode.net/WikiProject-Software. For instant access click here: http://webchat.freenode.net/?channels=WikiProject-Software. Please use your Wikipedia nickname. You are receiving this message because you are a member of WikiProject Software --Tyw7  (Talk • Contributions) 17:47, 21 July 2009 (UTC)

Have you considered doing this with AWB? All that needs to be done is create a list from category , select List>Convert to talkpages. Remove the non-usertalk pages (the templates and Wikipedia space pages), deal with the pages where the category is on a subpage, and then append your message. If you like, I can do this with AWB, or you could request approval, and then do it personally. Cheers - Kingpin13 (talk) 18:06, 21 July 2009 (UTC)
I would like this. Can you do it for me? --Tyw7  (Talk • Contributions) 19:54, 21 July 2009 (UTC)
Try asking at Wikipedia:AWB/Tasks if you prefer not to do it yourself. --ThaddeusB (talk) 20:25, 21 July 2009 (UTC)

 Doing... - Kingpin13 (talk) 22:01, 21 July 2009 (UTC)

 Done - Kingpin13 (talk) 22:07, 21 July 2009 (UTC)

Category work

The National Register of Historic Places Wikiproject has begun a process to categorise all articles in Category:Historic districts in the United States into state-level categories. This is impeded by the fact that all articles with {{infobox nrhp}} that are marked as historic districts [HDs] in the infobox are automatically placed into the nationwide category, so we want to remove that function; however, if we do that, most HDs won't be in any HD category. Could we have a bot add the category (i.e. just placing the text "[[Category:Historic districts in the United States]]") to most HD articles, so that we can remove the function without these articles losing the HD category? In short, adding this category will enable us to distinguish which articles need to be recategorised. I'm specifically requesting that this category be added to pages that fit the following criteria:

  • All pages that have {{infobox nrhp}} and include either "| nrhp_type = hd" or "| nrhp_type = nhld"
  • Only those pages not already in a subcategory of the nationwide category: since those have already been recategorised, we don't need to mark them as yet in need of maintenance

I do not ask for the bot to sort any articles into categories itself: this request is only intended to make human sorting easier. Nyttend (talk) 02:46, 15 July 2009 (UTC)
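The selection criteria could be sketched like this (Python; the subcategory check is simplified - a real run would query the category tree via the API rather than take a flat list):

```python
import re

NATIONWIDE = "Historic districts in the United States"

def needs_hd_category(wikitext, categories):
    # Criterion 1: {{infobox nrhp}} marks the page as an HD or NHLD.
    if not re.search(r"\{\{\s*infobox nrhp", wikitext, re.IGNORECASE):
        return False
    m = re.search(r"\|\s*nrhp_type\s*=\s*(\w+)", wikitext)
    if not m or m.group(1).lower() not in ("hd", "nhld"):
        return False
    # Criterion 2: skip pages already sorted into a state-level subcategory.
    if any(c.startswith("Historic districts in") and c != NATIONWIDE
           for c in categories):
        return False
    return True
```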

Q: would you object to the bot sorting some cases into the appropriate state? I would think if one & only one state is mentioned in say the first two sentences of the lead that it would always be the correct state. --ThaddeusB (talk) 03:28, 15 July 2009 (UTC)
Our main concern is sorting by state, so surely if you allowed us to skip a step, it would be fine. Just as long as the bot doesn't oversort, such as by adding articles already in Category:Historic districts in Cincinnati, Ohio into Category:Historic districts in Ohio. Nyttend (talk) 14:32, 15 July 2009 (UTC)
Please be careful, however, to ensure that it's just the ones in one state: there are a few HDs that span state lines, so it wouldn't be good for the bot to do anything with them except adding the nationwide category. Nyttend (talk) 14:33, 15 July 2009 (UTC)
Since no one else has indicated any interest in doing this, I will go ahead and put together some code within the next few days. --ThaddeusB (talk) 18:16, 21 July 2009 (UTC)
Thanks! Nyttend (talk) 16:31, 22 July 2009 (UTC)

Could a bot run through this category and remove any |class= or |importance= designations on any Category, Template, or File talk pages that are tagged with {{WikiProject Video games}}. Thanks, MrKIA11 (talk) 19:03, 11 July 2009 (UTC)
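If anyone picks this up, the edit itself is nearly a one-liner per page; a hedged sketch (assuming the banner is the only place these parameters appear on the affected talk pages - a real bot would scope the substitution to the {{WikiProject Video games}} transclusion):

```python
import re

def strip_assessment_params(talk_text):
    # Drop class= / importance= (and their values) from the banner markup.
    return re.sub(r"\|\s*(?:class|importance)\s*=\s*[^|}]*", "", talk_text)
```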

Wouldn't it be easier to just have the template basically ignore specifications of "class=category", "class=template", "class=file", and "importance=NA"? Anomie 20:36, 11 July 2009 (UTC)
It uses {{WPBannerMeta}}, so no, I don't think so. Plus, it's only ~700 pages. MrKIA11 (talk) 21:13, 11 July 2009 (UTC)
Is this really needed? Seems kind of pointless to remove empty parameters, and parameters which are used on such pages will most likely be correct anyway. –Drilnoth (T • C • L) 20:22, 12 July 2009 (UTC)
No, that's the whole point. The project uses Template-, Category-, & File-Class designations, so all of those pages are incorrect. MrKIA11 (talk) 12:44, 13 July 2009 (UTC)
So... MrKIA11 (talk) 14:38, 15 July 2009 (UTC)

A user was nice enough to take hours out of their time to manually update the pages that a bot could have done in minutes. I didn't realize that this was so outrageously complicated to program that I could not even get a response. Thanks MrKIA11 (talk) 12:42, 20 July 2009 (UTC)

Next time feel free to drop by my talk page. I don't keep a close eye here. –xenotalk 01:38, 23 July 2009 (UTC)

I'd like a bot that would move images from Category:Cc-by-sa-3.0,2.5,2.0,1.0 images to Commons, tagging its work with a maintenance category. To ensure that the images are not copyvios it would be best to check the EXIF data on each image. If an image doesn't have any, it would need review; the rest should be transferred over there. Thank you, --Diaa abdelmoneim (talk) 09:04, 23 July 2009 (UTC)

South American football articles.

As per the consensus at Wikipedia talk:WikiProject Football#Formal petition to change the naming conventions, could someone create a bot to move any article with a year in Category:Recopa Sudamericana and Category:Copa Libertadores de América so that the year appears in the front instead of the end, example: move Recopa Sudamericana 1990 to 1990 Recopa Sudamericana and Copa Libertadores 1960 to 1960 Copa Libertadores. Thanks. Digirami (talk) 22:46, 21 July 2009 (UTC)

Y Done — Just those two categories, no subcategories as consensus didn't seem to be clear for any other articles. — madman bum and angel 03:29, 22 July 2009 (UTC)

Change parameters to lowercase

I updated {{Cite music release notes}} and changed all the parameters to include lowercase letters only, as per infobox MOS. There are over 40 articles transcluding the template that need to be fixed so a bot would be very helpful here. Thanks. –Dream out loud (talk) 02:58, 24 July 2009 (UTC)

English football competition seasons

Per the consensus at Wikipedia talk:WikiProject Football#Formal petition to change the naming conventions, could someone please create a bot to move the articles in Category:FA Cup seasons, Category:Football Conference seasons, Category:Football League Cup seasons and Category:The Football League seasons so that the years are at the beginning. For example, FA Cup 1871–72 should be moved to 1871–72 FA Cup, Football Conference 1994–95 to 1994–95 Football Conference, Football League Cup 2007–08 to 2007–08 Football League Cup and The Football League 1974–75 to 1974–75 Football League. – PeeJay 22:11, 20 July 2009 (UTC)
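The rename itself is mechanical; a minimal sketch of the title transformation (Python, for illustration only; it covers plain years and en-dash season spans, but not titles with a leading "The", which need special-casing as in the Football League example):

```python
import re

def year_first(title):
    # "FA Cup 1871–72" -> "1871–72 FA Cup"; "Copa Libertadores 1960" -> "1960 Copa Libertadores"
    m = re.fullmatch(r"(.+?) (\d{4}(?:–\d{2})?)", title)
    if m:
        name, year = m.groups()
        return "%s %s" % (year, name)
    return title
```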

BRFA filed: Wikipedia:Bots/Requests for approval/MadmanBot 7madman bum and angel 03:59, 22 July 2009 (UTC)
Please confirm the moves listed thereon. — madman bum and angel 05:05, 22 July 2009 (UTC)
I have edited the target pages on that page. All of the targets should be OK now. – PeeJay 07:28, 22 July 2009 (UTC)
I edited the regular expression and it resulted in the exact same list. I'm going to update the source code then request BAG assistance to get the task speedily approved. — madman bum and angel 23:45, 22 July 2009 (UTC)
Doing...madman bum and angel 02:50, 24 July 2009 (UTC)
Y Done – Please confirm the categories' contents. — madman bum and angel 03:26, 24 July 2009 (UTC)
Looks perfect. Thanks for the help. I may be back at some future date to request the moves of other countries' competition articles. – PeeJay 10:17, 24 July 2009 (UTC)

Adding pictures to articles

Could we have a bot that creates a list of articles lacking pictures for topics where there are articles on other language wikis with pictures from commons? I'm thinking of something like the redlink lists, lists of:


Article without picture - Article on same topic with picture

With section breaks every forty rows or so to make the file usable. Then anyone could delete the row if they've copied the photo over, strike through and give a reason if the photo is somehow not right for EN Wiki or alternatively leave a comment along the lines of probably needs a German speaker to fathom out why they find that picture relevant to the subject. We'd probably also need a way to attribute the choice of photo, I'm wondering if that would be OK with an edit summary of add photo from same article on DE:Wiki. ϢereSpielChequers 11:22, 24 July 2009 (UTC)

There are more than 4000 pages with direct links to PubMed abstracts. These should be converted to citations using citation templates with the |pmid= parameter. From Template:Cite_journal/doc, "If a DOI or PMID is available, the URL should only be specified if it would point to a different page to that which a DOI or PMID would redirect to." There are existing tools like diberri's template filler that output a properly formatted citation using {{cite journal}} for a given PMID. For existing citations that include a direct url to a PubMed abstract, the URL should be removed and the PMID added in the |pmid= parameter if it's not there already.  —Chris Capoccia TC 08:36, 23 July 2009 (UTC)
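For reference, extracting the PMID from the two common link forms is straightforward (a Python sketch; these are the usual PubMed URL shapes, but a real bot would need to handle more variants):

```python
import re

def extract_pmid(url):
    # Modern form: http://www.ncbi.nlm.nih.gov/pubmed/12345678
    m = re.search(r"ncbi\.nlm\.nih\.gov/pubmed/(\d+)", url)
    if m:
        return m.group(1)
    # Older Entrez form: ...query.fcgi?...&list_uids=12345678
    m = re.search(r"ncbi\.nlm\.nih\.gov/entrez/query\.fcgi\?\S*list_uids=(\d+)", url)
    if m:
        return m.group(1)
    return None
```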

Possible – however, those search results include a lot of articles that already use the citation template with the pmid parameter. — madman bum and angel 03:42, 24 July 2009 (UTC)
yes. maybe i didn't say this so clearly, but for those cases, the url should be removed.  —Chris Capoccia TC 04:38, 24 July 2009 (UTC)
Ah, I see. I didn't quite get that because diberri's template filler to which you referred uses the URL. Still possible; I'll consider it shortly. — madman bum and angel 05:13, 24 July 2009 (UTC)
right, it uses a URL by default, but it won't ever use a URL to point to the PubMed abstract.  —Chris Capoccia TC 06:50, 24 July 2009 (UTC)
Yes, I see what you mean. Coding.... — madman bum and angel 21:38, 24 July 2009 (UTC)
There are specifically 3457 external links to PubMed. My bot is now assessing how many of those can be removed by removing the redundant url parameter from a transclusion with a pmid parameter. — madman bum and angel 01:59, 25 July 2009 (UTC)
The bot wrote 1082 diffs before crashing due to a loss of network connection. I think that task by itself is worthwhile enough that I'm going to file a BRFA for that alone. — madman bum and angel 16:55, 25 July 2009 (UTC)

BRFA filed: Wikipedia:Bots/Requests for approval/MadmanBot 8madman bum and angel 05:12, 26 July 2009 (UTC)

American film taskforce tagging

Continued at User talk:Xenobot Mk V/requests#WP:USFILMS
The following discussion has been closed. Please do not modify it.

Requesting a bot to tag the talk pages of all films in the category Category:American films with the task-force tag parameter of |American-task-force=yes within the {{Film}} tag. There could be a handful of American films that either a) don't have a talk page or b) don't have the film tag, so can these be tagged with {{Film|American-task-force=yes}}? The American cinema task-force was set up a few months back, but most films don't have the talkpage tagged. Let me know if you need any more info/explanation. Thanks! Lugnuts (talk) 18:28, 15 July 2009 (UTC)

It may be worth noting that there are articles with {{Film}} templates that lack parameters, so they would just need |American-task-force=yes added as a parameter. —Erik (talkcontrib) 18:33, 15 July 2009 (UTC)
For articles that do not already have the {{Film}} tag on their talk page, can they be tagged with {{Film|class=|auto=yes|American-task-force=yes}}? In addition, could any articles be given a Stub-Class assessment if they use one or more stub templates? PC78 (talk) 18:06, 16 July 2009 (UTC)
I'd like to request a brief hiatus on this request for a week - I had one which was nearly in process a few months back, but was far more comprehensive wrt the task force's scope than just the above category. I'd be happy to compile a better listing of categories next week, which collectively would cover the task force better. Girolamo Savonarola (talk) 23:47, 16 July 2009 (UTC)
Any update/progress with this, GS? Thanks. Lugnuts (talk) 18:13, 22 July 2009 (UTC)
Almost there...User:Girolamo Savonarola/uscats is nearly complete, but there's a lot of Directed by categories. Probably done tonight or tomorrow. Girolamo Savonarola (talk) 20:31, 22 July 2009 (UTC)
Thanks! Lugnuts (talk) 13:49, 23 July 2009 (UTC)
(Outdent) Okay, done compiling the category list. This page contains a more comprehensive listing of each and every individual category for which all of its articles should be tagged. Please don't tag sub-categories - this listing was created in order to avoid substantial quantities of mistags that would occur otherwise. I left it as a common-text line-by-line listing, so while it may appear ugly in wiki, it should be maximally flexible however one wishes to format it. Please let me know if there are any questions or concerns. Otherwise, a very very hearty thanks in advance! :) Girolamo Savonarola (talk) 22:36, 23 July 2009 (UTC)
Do you guys already have a bot-op lined up? It seems I'm the only one running WikiProject tagging tasks anymore. Anyone want to learn? It's quite easy. I've already written some instructions for TonyTheTiger here: User talk:Xeno#Chicago tagging. –xenotalk 13:39, 24 July 2009 (UTC)
Nope. Nor can I do AWB. (The amount of aggregate wiki work that is being lost to that tool being non-agnostic boggles my mind.) Girolamo Savonarola (talk) 18:53, 24 July 2009 (UTC)
Thanks for compiling the list, GS. Can someone please post on my talkpage before this goes live, so I can turn off bot-edits from my watchlist? TIA. Lugnuts (talk) 16:22, 25 July 2009 (UTC)

WP:EASTENDERS listas parameter

We would like a bot to add a listas= parameter to the template {{EastEnders project class}} on our article talk pages, which can be found in Category:EastEnders and its subcategories. The majority of articles already use defaultsort so it shouldn't be much of a problem for a bot to pick this up and add it to the talk page. AnemoneProjectors (talk) 19:05, 19 July 2009 (UTC)
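In case it helps the bot-op: the core transformation is just copying the article's sort key into the banner. A hedged Python sketch (the template name comes from the request above; everything else is illustrative):

```python
import re

def add_listas(article_text, talk_text):
    m = re.search(r"\{\{DEFAULTSORT:([^}]+)\}\}", article_text)
    if not m or "listas=" in talk_text:
        return talk_text  # no sort key to copy, or parameter already present
    return talk_text.replace("{{EastEnders project class",
                             "{{EastEnders project class|listas=" + m.group(1).strip(),
                             1)
```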

BRFA filed (X! · talk)  · @304  ·  06:17, 22 July 2009 (UTC)
Doing... (X! · talk)  · @247  ·  04:56, 29 July 2009 (UTC)
Thanks for this but... the bot is just adding the 'listas=' part and not the 'Last, First' part. AnemoneProjectors (talk) 09:16, 29 July 2009 (UTC)
Oh, it's supposed to add the Last, First part to the listas parameter. You didn't say that. ;) (X! · talk)  · @881  ·  20:08, 29 July 2009 (UTC)
Sorry :) Thought the mention of the defaultsort would get that bit across. I couldn't think of the right words! AnemoneProjectors (talk) 21:16, 29 July 2009 (UTC)

Running the PR script automatically.

Currently, Ruhrfisch is running the peer review script in a semi-automatic way every night. There is obviously no good reason that this could not be a fully automated process, and Ruhrfisch would appreciate being able to do something else with his time. This would also be a good opportunity to tweak the PR script and fix some of its known bugs. Headbomb {ταλκκοντριβς – WP Physics} 05:16, 25 July 2009 (UTC)

Edit, I forgot to mention that there was talk of extending the PR script to cover FAC/FAR/FLC/FLR/GAN/GAR and any other process involving some form of article review. The argument was that at worst it can't hurt, and that it'll give some form of initial feedback to the nominators so they can get ideas to improve the article while waiting for meat with eyes to give feedback. It would be a good idea to develop the bot with this possibility in mind. Headbomb {ταλκκοντριβς – WP Physics} 13:39, 25 July 2009 (UTC)
My sincere thanks to Headbomb for requesting this. I use the script to generate the semi-automated peer reviews (SAPR) each night and then paste the SAPRs into the proper monthly archive (currently WP:PRA/JL09). Then several hours later Peer review bot (which is run by Carl) links the SAPR for each article to its peer review page. I will notify Carl, as well as AndyZ (who wrote the script originally but is now inactive). AndyZ's talk page has some suggestions for improvements to the script - the major one that comes to mind is fixing it so it recognizes the new File name and does not give the "needs more images" text. Ruhrfisch ><>°° 12:35, 25 July 2009 (UTC)
I would keep the SAPRs for Peer review at the current archive, but if they are also run for FAC, FLC and the rest I would paste them into the talk page for each FAC/FAR/FLC/FLR. Not sure where to put them for GAN/GAR (these are also on their own page now too, so maybe just there?). The current SAPR archives (for peer review) are already quite large each month and I am afraid they would be too big if all of the others were put there too. I also use the number of SAPRs each month for the PR statistic for WP:FAS, so keeping the current archives for just PR SAPRs keeps that clear too. Ruhrfisch ><>°° 14:53, 25 July 2009 (UTC)
I looked into this some time ago. My memory is that the PR script just runs a large list of regular expression against the article, and generates a review from that. Unfortunately it is written in javascript, so running it automatically either requires making a harness in javascript or rewriting the long PR script in some other language. The rewrite would not be technically difficult, just very labor intensive. — Carl (CBM · talk) 13:08, 25 July 2009 (UTC)
I've implemented a framework for running the PR script on the Toolserver. Users tend to like the on-demand stuff since they can play with it, for better or worse. — Dispenser 17:46, 25 July 2009 (UTC)
That's a great idea. If people like that, we could simply link to it in the peer review header for each article, and not have to save the auto peer review on WP at all. That would mean less work for Ruhrfisch and would also mean I can retire the corresponding bot task from peerreviewbot. — Carl (CBM · talk) 20:52, 25 July 2009 (UTC)
That's fantastic! This way the PR gets updated as the article gets updated! Headbomb {ταλκκοντριβς – WP Physics} 22:12, 25 July 2009 (UTC)
Great - could the image problem be fixed? The script looks for the old "Image:Foo.jpg" file name and is fooled by the "File:Foo.jpg" and similar names into giving the "this article needs more images" message. Ruhrfisch ><>°° 21:04, 26 July 2009 (UTC)
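The fix Ruhrfisch describes is likely a one-line change to the image check - something along these lines (a guess at the shape of the rule, written in Python rather than the script's JavaScript, purely for illustration):

```python
import re

# The old check only recognised the "Image:" prefix; matching both
# prefixes stops the script flagging articles that use "File:".
HAS_IMAGE = re.compile(r"\[\[(?:Image|File):", re.IGNORECASE)

def review_images(wikitext):
    if not HAS_IMAGE.search(wikitext):
        return ["This article may need more images."]
    return []
```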
The Toolserver page is really only a stub implementation. It includes the basics of the PR script (loaded directly from the WMF servers) and page text with test code for the interface. Because it's running on a different domain, AJAX calls won't work, which means some of the cool features such as image review and (limited) spell checking won't work. Because the script is loaded directly, it will receive all the updates/problems of the official script. Answering your question: you're going to have to ask the maintainers to fix it for you. — Dispenser 22:39, 29 July 2009 (UTC)
Thanks - the problem is that AndyZ wrote the script and gave me access to the AZPR account, but has not edited here in just over a year. I emailed him about this, but have gotten no response. I think I can edit the message about images to mention the problem, but I cannot fix the script. Is there anywhere I can ask that someone could fix this? Ruhrfisch ><>°° 23:33, 29 July 2009 (UTC)
You can try an {{edit protected}} on the script's talk page, but looking at that page Gary King has offered to maintain the script at User:Gary King/peer reviewer.js. — Dispenser 11:36, 30 July 2009 (UTC)
Thanks very much - I actually do not watch the script or its talk page, so I completely missed that. I will ask Gary King. Ruhrfisch ><>°° 03:26, 31 July 2009 (UTC)