Recent news stories have covered the announcement of ChatNannies, conversational robots designed to locate and lure pedophiles in Internet chat rooms. Uncritical articles were recently published by New Scientist, BBC News, News.com, the skeptically-minded The Register, and many other news outlets.
Cameron Marlow managed to secure an exclusive interview with one of the “Nanniebots,” and posted a complete transcript along with some brief analysis. To anyone who knows anything about chatterbots and the history of artificial intelligence, the transcript represents either a revolutionary leap in technology or clear evidence of a human behind the keyboard.
So I did a little Usenet research on Jim Wightman, the 30-year-old UK-based software developer behind ChatNannies. His background is mostly in .NET and VB development, but I couldn’t find references to any interest or experience in artificial intelligence or chat technologies. One thread from 2002 critically discusses Jim’s claims of developing his own private Usenet newsreader software, even though his headers revealed he was using another popular newsreader. In another thread from mid-2002, he claimed to be writing a book on .NET for Wrox Press, but there’s no reference to him on their site. Update: I can confirm that Wightman was telling the truth, and was contributing to a book about ASP.NET controls that was later abandoned by Wrox UK through no fault of his own.
In Cameron’s comments, Jim himself stated that he has some psychological problems. But a bit more worrisome are his postings in alt.revisionism, a newsgroup largely centered around Holocaust denial. Someone posting under the pseudonym “Deaths Head,” with the exact same headers and IP address as Jim, regularly argued that the Holocaust never occurred. In another thread, he posted a graphic death threat to another user. (I can’t prove beyond the shadow of a doubt that these aren’t spoofed headers, but it seems very unlikely. Compare Jim’s message headers in alt.revisionism with a message by “Deaths Head” in the same group a couple days before.) Update: Jim only posted in the group for two months. He maintains he was only playing Devil’s advocate, and that these aren’t strongly held beliefs.
These personal beliefs suggest a skewed view of the universe, perhaps one in which he’s able to unleash 100,000 cutting-edge robots to save the world from pedophiles. The ChatNannies’ official Downloads page says the downloadable version of the control panel for their chatbots will be available on April 2, conveniently one day after April Fool’s Day. So it’s either the prelude to an elaborate April Fool’s joke, an attempt at defrauding corporate sponsors and individuals of their money, or the delusions of an arguably unstable person. Whatever the case, the media bought it wholesale.
March 24, 2004: Things are getting interesting! In the comments on Cam’s site, Guardian UK science columnist Ben Goldacre mentions that Jim offered to give an in-person demonstration of the ChatNannies bot on an isolated, non-networked computer with an independent third-party observer. (I’m guessing Jim will conveniently schedule the demonstration just in time for April Fool’s Day.)
March 25, 2004: Ben Goldacre’s appropriately-titled “Bad Science” column covers the Nanniebots, and Jim Wightman’s attempts to avoid getting debunked. Essential reading. Also, a reader sent in pointers to several death threats that Jim made on Usenet, which are considerably more violent than the one I found. Read them here and here.
March 26, 2004: The Register and VNUnet both ran new articles about increased skepticism of the Wightmans’ continued claims. ChatNannies started a developers’ forum. And a commenter points to TivoMedia.org, another Jim Wightman project that appeared to be unfounded, unraveling in this long Tivo Community thread from April 2003.
March 31, 2004: Shortly after midnight on April 1 in the UK, the ChatNannies.com site went offline. Never mind, it’s back.
April 2, 2004: In the “News” sidebar on the ChatNannies homepage, Jim Wightman announces he’s closing down the site in a week. Here’s the announcement, removed a couple minutes ago:
“Due to being treated like criminals for trying to help save children, we are closing the ChatNannies website at 00:00:00 GMT 11-04-2004. Many thanks to those of you who have shown your support…after this time however your Children are once again at Maximum risk from Paedophiles. You can thank, amongst others, Ben Goldacre from the Guardian and Barnardo’s in helping us reach this decision.”
He’s referring to Ben’s new Bad Science column and Barnardo’s public statement about Chatnannies.
April 5, 2004: The Guardian UK wrote a new article about the recent criticism from children’s charities. (Strangely, the article doesn’t refer to the bots at all.) Wightman posted a response. VNUnet posted a similar article.
The Chatnannies discussion forum is getting interesting, as well. Jim Wightman is posting actively there.
April 8, 2004: New Scientist removed the text of the original story, replacing it with a temporary retraction: “Serious doubts have been brought to our attention about this story. Consequently, we have removed it while we investigate its veracity.” Good! BBC News also removed their story.
April 13, 2004: Charles Arthur, the technology editor of the Independent UK, wrote a good column about the net’s skepticism.
June 16, 2004: New Scientist followed up with an amusing story of Wightman’s attempt to demonstrate the ChatNannies bot from his home. He couldn’t reproduce any of the intelligent conversation originally demonstrated in his transcripts, and analysis of the transcripts shows word-for-word dialogue from the freely-available ALICE bot, including typos from the AIML database. Their conclusion (and mine): still a big, fat fake.
June 17, 2004: Andy Pryke, one of the three observers of the Nanniebots demo for New Scientist, wrote up his observations and posted transcripts of the chats. It’s clearly an ALICE bot.
For what it’s worth, IIRC, it broke in New Scientist. I’m a subscriber to that magazine, but there’s no way I’d buy it for their coverage of internet-related stories — their journalists in that department are seriously lacking a clue.
I wouldn’t be surprised if they’ve been hoaxed, and the other articles are due to Reuters republishing their content…
Yes, the Nanniebot story was first mentioned in New Scientist. The shocking part is that no other publication questioned the legitimacy of his claims… Are the fact-checkers all on strike?
The software that will supposedly be available for download is LiveNannie, not the chatbot. This page explains (at the bottom) that it’s simply for controlling chat room access on a single PC. Of course, given the BS about encryption on this page, I’m not sure this guy knows what he’s talking about in general.
Thanks for the tip, I corrected my entry. Regardless, it’s their first public software release which would immediately validate or discredit their claims. The timing is very suspicious.
This guy’s claim might be a bit more believable if he could get his ChatNannies software to sound even half as robotlike as his holocaust denials: “…Under duress….Under duress….Under duress….”
There should be no question as to the identity of Jim Wightman and “Deaths Head” – note his sign-off found here:
“Jim (Totenkopf/Death’s Head)”
Jim Wightman – Jim White Man?
Very odd.
I agree that the whole business seems fishy, but what do Obsessive Compulsive Disorder and Social Phobia have to do with it? Social Phobia is little more than (very painful and potentially crippling) self-consciousness and anxiety over being around people. OCD (greatly simplified) is when a person has recurring and distressing thoughts or impulses (called obsessions, which commonly relate to fear of infection or harming others) which are dealt with through strange rituals (called compulsions, such as washing or arranging objects). Neither is likely to prompt a person to try to scam people (although someone with OCD might be plagued with fears that they might without meaning to!).
Great investigative journalism, well done. Since the story first arose, I’ve also noticed that the ChatNannies website has removed some details and claims, including how his system needed a cluster of Dell servers with terabytes of (disk? RAM?). I can’t prove that those pages were there but I did see them – as well as a paragraph encouraging academics to ask about the technology.
I wrote a brief piece here: http://caseyporn.com/blog/archives/000087.html
Yawn.
Investigative journalism? Hehe.
I _could_ take your ‘piece’ apart point by point but frankly, it’s pointless: I bet you use newsgroups a lot, don’t you?
Perhaps we should open a little corner of the internet just for you and people like you? We could call it ‘skeptics for skepticism’s sake’ corner hehe.
btw, yes: me = Jim Wightman = Deaths Head = Totenkopf…all you needed to do was ask!
True investigation would also have revealed the positive posts I’ve made on Usenet, and of course you would have contacted me too…and so you discredit yourself with your one-sided, unknowledgeable rhetoric.
So, Jim, is the jig up or are we just too skeptical?
Actually, you might also consider checking out the ‘psychological problems’ – you know, it’s nothing to be ashamed of or worried about…I can’t help having them and it’s just like having a lazy eye or something 🙂
If you want to know more, and indeed if anyone visiting this site needs help, here are some links:
http://www.ocfoundation.org
http://www.socialphobia.org
Hehe as for the ‘jig’ being up, I’ll repeat what I’ve said all along…the AI I have created is the other side of the conversation transcripts you have seen. This thing exists. It’s not finished, it still needs some work, but we are going to go for the Loebner prize and not just win…but blow everything else away.
You know, and can I be honest here, I understand the skeptics and the doubt and all…that I can handle; but when it comes to trawling thru newsgroups for old stuff taken out of context just to – what? discredit me – then it gets ridiculous and I feel like saying ‘right then, f**k ya’ and going back to developing financial software. If it wasn’t for the fact we’re trying to save children’s lives here, quite frankly, the way I feel this morning, I’d throw the lot in the bin. You want to know the real me? Look at this posting.
The ‘chatnannie’ dialogue and the dialogue from Jim’s replies are awfully similar. But I’m no linguist.
I find the claim to have built a machine that convincingly passes the Turing test very interesting, but I’m afraid Occam’s Razor compels me to believe there are humans behind the Nanniebots. It’s still by far the simplest and most reasonable explanation, OCD aside.
This level of AI is something that, in my opinion, would have been attained not by just one person who then keeps it private, but by multiple people across the world roughly simultaneously, including the highly-funded groups who have been researching it for decades. AI isn’t based on “one amazing breakthrough”, but rather computing power, neural network emulation and cleverness, and you don’t just magically get the first two by being smart (or the last one, either).
So I’m not saying it’s not possible, but I find it unlikely. If Jim can make his service scale, regardless of what drives it, then he becomes successful in the long run even if it’s humans. What’s interesting to me is the application of this supposed level of AI in hundreds of other areas. Jim, are you planning on releasing your AI software to the public anytime soon?
I’m looking forward in great anticipation to the Channel 4 documentary “How I Fooled the World’s Smartest Brains”, in which our hero Jim brags about how he got into all those top-end science magazines just by pushing the paedophilia button. Of course the documentary will conveniently ignore the huge scepticism from just about everyone else.
But the important message will remain – why do we trust scientists?
I read the transcript carefully.
You notice how JW has two identities on Usenet? Worth remembering as “jimw” appears to check on how the chat is going. 🙂
Also, you notice how the “bot” is apparently not only programmed to think it’s a human, but *explicitly* tells Cameron “i really really am not [a bot]! sorry if i seem like one!”
Prediction: “NannieBot” will not be entered for Loebner because of some contrived dispute on the ground rules, some newly revealed personal problem on the author’s part, or both.
[I did like the “error:beginning core dump::modRecover” bit though – piped into the chat stream no less, like any good diagnostic… wonder which one of “the very latest Microsoft technologies” incorporates a “core dump” nowadays, hehe]
Ben: “But the important message will remain – why do we trust scientists?”
The trouble is, “scientists” don’t get in on the act. These stories originate from individual science writers and editors, who may be working outside their specialism, and are then propagated by non-scientist editors who don’t have the background to assess them. At no point is an informed consensus consulted.
Jim offered to give an in-person demonstration of the ChatNannies bot on an isolated, non-networked computer with an independent third-party observer.
This reminds me of the Turk, a chess-playing automaton and the subject of Tom Standage’s book of the same name. Its creator toured this magical machine around Europe, giving demonstration after demonstration, always showing the audience the clockwork guts of the machine before it beat any and all comers. The whole thing was a hoax, of course, with elaborate partitions concealing a human player amongst the gears.
As for the similarity between Nanniebot’s prose and that of Mr. Wightman, it sounds like a job for Don Foster.
Jim Wightman has apparently done this before: he trolled newsgroups claiming he wrote his own Xnews-based newsreader using “libraries” from it, then repeatedly gave evasive answers when pressed for details.
Seems like a fraudster to me. Sorry buddy, hate to break your game.
OMYGOD!! I’ve done it! I found his PICTURE! This hoaxer looks like he’s straight out of American Chopper.
Does this guy really strike you as someone who could write this super-advanced AI?
ROTFL!!!!!!!! HUMANAHUMANAHUMANA
Sorry Ray, I was being facetious. Your point is a very important one though. (that would most likely also be ignored by my imaginary TV program)
Is there a chance this AI chatbot is the real McCoy?
Nope. None at all. It’s a human behind it. Don’t be fooled for a second by this huckster. Shame on the so-called journalists for their credulity.
It’s clearly a fraud. There are many, many give-aways, that I’m surprised someone didn’t call this guy on it sooner.
Let’s just take one simple ‘for instance,’ regarding the request for advice about the cheating friend. Now the first comment could be seen as an AI ‘doable,’ because the bot asks about the friend, “Is he a good friend?” Completely easy question to ask in the AI world. But the two follow-up questions asked by the bot are not possible:
was he cheating
or just looking
There’s no way AI today can easily differentiate between what was asked (cheating in class) and other kinds of cheating (e.g., with a woman). Even context isn’t helpful, as I could’ve been asking about my friend who’s cheating with a teacher on her husband. AI simply isn’t able, at this stage, to make that subtle distinction and give the “right” response. The AI would have to:
1. get that we’re talking about academic cheating
2. get that it’s the friend cheating, not the person asking
3. get the subtle difference between ‘looking’ and ‘cheating’ (one often made by teens, but not as readily made by adults)
And that’s just one of the smallest problem responses in the transcript. While we humans have no trouble piecing together the relationships involved in the two sentences that set up the cheating scenario, AI as we know it today would have serious difficulty with this situation, with maybe a 40-50% chance of answering in the right context.
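To make that concrete, here is a minimal sketch of the kind of keyword/template matching that ALICE-style bots actually do (my own illustration, not anyone's real code). The surface pattern fires regardless of which kind of “cheating” is meant, so none of the three steps above ever happens:

```python
# Minimal sketch of ALICE-style keyword/template matching (illustration only,
# not ChatNannies code). The bot keeps no context, so academic cheating and
# romantic cheating trigger exactly the same canned follow-up.
import re

TEMPLATES = [
    (re.compile(r"\bcheat(s|ed|ing)?\b", re.I), "was he cheating or just looking"),
    (re.compile(r"\bfriend\b", re.I), "is he a good friend?"),
]
FALLBACK = "tell me more about that"

def reply(line: str) -> str:
    for pattern, canned in TEMPLATES:
        if pattern.search(line):
            return canned            # first keyword hit wins; nothing is "understood"
    return FALLBACK

print(reply("my friend was cheating in class"))       # -> "was he cheating or just looking"
print(reply("my friend was cheating on his wife"))    # -> the identical canned line
print(reply("my friend copied answers on the test"))  # -> "is he a good friend?" (no 'cheat' keyword)
```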
Apparently being sad cos his dog died somehow excuses antisemitism. (Jews were arrested because they were all criminals, according to the baldy beardy)
It’s not a bad idea to get the rumor going around that these things exist, since that will hopefully scare a few potential pedophiles off of attempting to lure in the kids, but it seems pretty obvious that such a jump in AI is extremely improbable. I’m a bit mixed on whether it’s ok to promote such a rumor for the benefits, or (and this wins, I think) whether it’s more important to keep to the truth.
That said, I’d love it if it were all true, as we’d soon see great improvements in a ton of areas of the world.
jkottke: … the Turk …
Assuming this demo ever happens, I hope they’re very stringent about checking the hardware (i.e. no hidden radio modem, etc).
Hehe you guys make me wish for nuclear holocaust, really.
Your arguments are below me. Particularly I love the way you’ve used a flamewar about me ‘double killfiling’ (you don’t find that FUNNY?) someone in a newsreader to be a pointer to unreliability…you really are humourless aren’t you.
In my opinion it is you that are the trolls, because you ‘troll’ for every last snippet of information in every last newsgroup, out of nothing but pure jealousy and hatred.
It is your kind of attitude I attempted to challenge on alt.revisionism (and hey, guess what – if you call me names such as ‘antisemite’ or ‘revisionist’ I don’t give a f**k because I know the truth) and was increasingly frustrated by the bigotry and ignorance of those that claimed to be ‘intelligent’. People that when faced with a ‘fact’ (whether real or not) would argue that it can’t possibly be true because _history_ forbids it. More like herd mentality. And in truth that’s the only ‘ism’ I’m guilty of…antisheepism.
Run along little sheep, your masters are waiting for you! Baa! Baaaaa! Baaaa!
I am constantly amused by people such as Fred Harris, who claim our tech can’t possibly be as good as we say…because…well because THEY can’t do it or imagine how someone could! It’s not the best argument in the world, quite frankly.
I quite like baldy beardy actually – nice ring to it. Cheers.
Here’s an idea. There seem to be two choices as to how this can go. Ok three. ONE, you can bleat on and on about me and the AI and how wooly the coat on your back is, with only seeing evidence FOR the AI and NONE against (making you sound, frankly, like a$$holes), TWO I can withdraw the AI for now, sell it to the highest bidder, and I’ll laugh in your faces when you are being charged to use something that has an impact on society when you could have had it for free…or THREE you can take my advice and STFU until either someone comes up with a sensible ‘proving’ mechanism (it seems that even the technique quoted above isn’t enough for you, baaaa) or I release the source code to MIT to pick over at their leisure.
FYI, typical of you sheepy dullards, instead of using the fact I’ve posted on here to ask sensible questions and find out a bit more about the work, you’ve attempted to insult me and piss me off. And you accuse me of not being open minded?
Hey Jim, how about you just put up or shut up? Until there’s independent verification from someone who saw the “AI bot” running on a standalone machine with no networking capabilities, we have ABSOLUTELY NO REASON to believe your claims.
Frankly, seeing your responses, you really seem like a dick that’s getting off on all the attention you get. I’d love it if you really came up with this leap in technology and could intelligently discuss the principles behind it, as opposed to calling any skeptics dullards, sheep, assholes, etc. Right now you just look like a fuckhead that’s not willing to take criticism, doesn’t want to distribute the knowledge, doesn’t want to really talk ABOUT the technology, just talk around it. Come on, you came up with the best AI ever, surely you have something to say on how that happened? Details about the size of the code, the length of time it took you, the approaches you took, which ones failed, what work/theory you based this on, etc.?
Or are you just gonna laugh at us from your lofty throne of intellectual and moral superiority, say we’re nowhere near your genius and couldn’t understand a thing you said if you actually talked about the AI itself?
Man, flame wars are fun. Of course, that brings you no closer to proving anything about the AI you allegedly developed, but that’s not what you want to do, is it? Certainly you’re not acting like it is.
I remain very skeptical, not certain that it’s false, but utterly convinced that you’re a jackass. That’s all you accomplished up to now. Care to raise the stakes?
…until either someone comes up with a sensible ‘proving’ mechanism (it seems that even the technique quoted above isn’t enough for you…
Nothing personal. It’s just that given the history of AI (such as The Turk that jkottke mentioned) it’s reasonable to ask for demo conditions that stringently exclude the possibility of human intervention.
For what it’s worth, I’m a CS PhD candidate at UCLA and as much of an expert on artificial intelligence and natural language processing as there is, and I’ve posted my analysis of the ChatNannies claims.
How many times have I got to repeat myself? We are open to any amount of testing in any form, scientifically constructed so as to conclusively prove the validity of our claims.
But all I’m hearing is ‘bleat bleat bleat’…like you don’t want me to prove the AI so you can carry on bitchin about it.
If it wasn’t for the New Scientist story I wouldn’t have even mentioned the AI to this extent YET; I was planning on trying to get MIT verification first. But since this is now out in the open, we’re on the back foot. I’m not asking anyone to believe us without question; just have a little patience and don’t go for the throat without any proof that this ISN’T a bot either!
Is that too much to ask?
…and I haven’t been given the opportunity to talk about the AI, I’ve just been getting shit from you guys! No, I don’t take criticism well, I’m not ashamed of that either, I’m a programmer not a public debater!! So cut me some slack!
Just take a second to put yourself in my position – if you were attacked like this you wouldn’t exactly be overjoyed to be releasing details either, would you?
Oh and as for that Michael Williams piece – that dude is so out of touch with current technology it’s untrue.
…and his math skills stink! (see his calculation of how many chat hours have been performed in testing)
You’re a programmer, not a debater? Then talk about programming. Fuck the lack of opportunities, you have one right here. Go, you’ve got the red carpet. Talk about the technology. Talk about the theory. Show Mr. Williams how out of touch he is with current AI by addressing every point in a technical manner. Come on, he’s just a PhD candidate, you single-handedly created the most powerful AI in the world. Surely you can do it? Talk to us about AI. Get technical. We’re waiting.
Maybe you could enlighten us as to what calculation of chat hours you are referring to? I see no such thing in the link that Michael provided.
JW: You’re the one getting hysterical here. You’ve made some incredible claims, and I don’t see why you’re angry that people are skeptical. All of my objections are quite reasonable. If you actually have accomplished what you claim to have accomplished, then you’re right: I am out of touch with current technology.
In other forums you’ve compared yourself to the Wright brothers and other inventors who were doubted, but Thomas Edison and the Wrights were building on well-known principles. There’s absolutely no precedent for what you claim to have done. If your claims are true, you’ve made a quantum leap into a future I doubt will ever be realized.
As for my math skills, I didn’t make any comments about how many “chat hours have been performed in testing”. I said that your two claims to have 100,000 bots and only 2,000 successful conversations don’t reconcile, or at least not in a direction favorable to your position.
I’ve got plenty of patience. In fact, I’m eager to take the rest of the week off from work and analyze any source code you send me. I will not believe the results of any demonstration you or — really — anyone else provides until I see the source code.
If Matt Drudge posted a story claiming the moon is really made of cheese, I’d doubt it even if every other news source agreed. Why? Not because I particularly distrust the media, but because I know about astronomy and cosmology. I trust that the earth is round, despite never having seen it from space myself, because it makes sense. That the moon is made of cheese doesn’t make sense, and neither do your claims.
Additionally, my claim to be “as much of an expert on artificial intelligence and natural language processing as there is” is more about the primitive state of the art than about me being some sort of internationally recognized genius. I know a lot about the field, but no one is an expert on artificial intelligence — or “real” intelligence — yet, if there ever is one.
Except for maybe Mr. Wightman!
Here’s an idea – why don’t you post the first 25 nonblank lines of ModRecover? We know it must exist, since it was supposedly invoked for a ‘core dump’ when ‘the AI crashed’ in the Cameron session. So you’re not giving away any deep secret by acknowledging its existence. And if its purpose is to recover from a software crash, then it’s probably not integral to your core AI anyway. Just a housekeeping module. So there should be no risk in showing us a page of that code.
As another PhD candidate (at USC/ISI) with expertise in chatterbots and natural language processing similar to Williams’s, I have to concur with him. Here’s my two cents:
The bogon count is way too high…
This bot is way too perfect to be true.
Judging from the other state-of-the-art chatterbots I’ve seen and worked with, the output of Nanniebot is leaps and bounds (I estimate a decade’s worth of advances, at least) beyond where the rest of them are.
A few things that I found especially amazing, and that I don’t see how they could be easily implemented:
(1) being able to differentiate between two speakers, and
(2) calling the second dude a “bot thingy”
That is way too sophisticated for an AI.
There is so much external knowledge like this necessary to conduct day-to-day life and conversation that it’s near-impossible to codify. Cyc has been trying it for years, and there’s been little success…
Granted there is a lot you can do codifying simple question-answer interactions with no history… this is what ALICEbot does–she is equipped with a HUGE database of facts and patterns, and that’s how she answers questions… But there were a few subconversations there in the transcript that seemed like they transcended the normal get-a-question-look-up-an-answer-in-my-database methodology.
I could be wrong (I want to be wrong! It would be an amazing thing for the NLP field if this is real!)–but I would be really surprised if I am.
-Nick
Usenet’s full of people like this – attention seekers who seem plausible until you push them, when they suddenly cease responding like a normal person and start making accusations of victimisation. Any real AI coder would have responded with some hard detail rather than promising future revelation by now. And Ben, if you ever get to see anything, I’ll be astounded. The most interesting thing here is the paradigm shift from newsgroup disruption to mainstream media. The answer is always the same though, they go away when you stop feeding them.
Jim, here’s a simple question.
Have you changed your website since the New Scientist article? I remember there being a paragraph about the hardware you required (Dells, a terabyte of something, and a big internet connection), and that you had a large number (millions?) of chat bots currently running. I also remember a paragraph in which you encouraged academics to ask you about your work.
If you’ve changed your website, why?
Casey: I believe it was 100,000 chatbots. I don’t visit chatrooms much, but how many are there that a few thousand pedophiles would congregate in? They must be packed with nannybots. Then again, maybe there are far more pedophiles than I’d estimate.
Considering that the vast majority of pedophiles are adult males, there are probably 100 million people in America in the right demographic. What percentage are pedophiles? 1%? Probably less. What percentage of those go into chatrooms looking for victims? 50%? I doubt there are 500,000 pedophiles scouring chatrooms for victims, but even if there are that many how many are online at any given time?
There are probably more NannyBots online than pedophiles.
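Putting those back-of-the-envelope numbers in one place (the percentages are guesses from the comment above, and the fraction-online figure is my own assumption, added only to finish the arithmetic):

```python
# Fermi estimate using the guessed figures above -- none of these are measured
# values; they only show the orders of magnitude involved.
adult_males_in_demographic = 100_000_000
share_who_are_pedophiles   = 0.01    # "1%? Probably less."
share_who_hunt_chatrooms   = 0.50    # "What percentage of those go into chatrooms... 50%?"
share_online_at_any_moment = 0.10    # my assumption, just to finish the sum

predators_online_now = (adult_males_in_demographic
                        * share_who_are_pedophiles
                        * share_who_hunt_chatrooms
                        * share_online_at_any_moment)
print(predators_online_now)  # -> 50000.0, already fewer than the claimed 100,000 bots
```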
Casey: that section is still there, but appears in a pop-up as the site launches. Maybe you have pop-ups blocked?
Whoops, my mistake. I do have popups blocked, and previously it was available from the menu (not any more).
Doesn’t change my views that it’s a fake, though.
I wouldn’t go so far as to demand source code for a test; even binaries placed on a trusted (and non-networked) machine would be acceptable. I look forward to hearing about in-person demonstrations soon!
Ok, to come to this guy’s defense just a little. There was some work done by Schank in the seventies on story interpretation that could maybe do some of that reading between the lines. Now that was not a conversational context, and it was encoded, but it isn’t impossible, just very unlikely. Oh, and there is, believe it or not, no problem with saying “Was he cheating or just looking?” regardless of whether it is cheating off a test or, ummm, lust cheating.
Chatterbots have not been a focus of much research of late (at least partly because they aren’t all that useful and NNs are not well suited to the task). So maybe some synthesis could happen that might allow some of this stuff, but still, it would be a massive breakthrough. Oh, and it would completely change the direction of the research, which till now has been focused mostly away from the symbolic logic/language-encoded stuff and towards connectionist stuff, embodied cognition, and action-oriented reasoning (even if Andy Clark is a traitor ;-P). This would refute the idea of tying things to their environment. That does mean there should be skepticism, some idea of how it works is needed, and the guy’s history is not exactly where one would expect a breakthrough to come from.
Is anyone in the UK able to go with this journalist to meet the nanniebot “unplugged” and debunk it?
http://www.guardian.co.uk/life/badscience/story/0,12980,1176778,00.html
Ben Goldacre mentions that Jim offered to give an in-person demonstration of the ChatNannies bot on an isolated, non-networked computer with an independent third-party observer.
Two quotes from Ben (from the above link):
“Can I come round to Jim’s place? He chuckles … Jim doesn’t keep the conversation datasets on site in Wolverhampton. ‘I know it sounds a bit Mission Impossible but … ‘ He’s worried they might get stolen.”
“He has no copies. It’s 18 terabytes of data, to be fair. There are copies in the hosting facilities, one in London. I offer to go there. ‘There might be security issues with them letting us in … ‘”
Ah, Jimbo. If this is not an elaborate April Fools prank, seriously, play it off like it is one, because you’re going to lose a lot of face otherwise.
While just about any of Nanniebot’s lines could be nitpicked, “bot thingy” really put the last nail in the coffin for me (that is not to say the coffin wasn’t already shut by then) — that whole snippet would require Nanniebot to understand the concept that Cameron programmed Nathan, realize that Nathan is a bot, be able to distinguish when Nathan is talking (versus Cameron), and recognize that “bot thingy” is an appropriate construction in that context. Imagine, for instance, the exact same conversation, but with Nathan being Cameron’s dog or friend — “friend thingy” and “dog thingy” don’t quite work; similarly, unless the bot recognizes the concept that Cameron created/works on Nathan, “Good luck with Nathan,” while not an unusual thing to say, per se, would not be quite so fitting or appropriate in context.
In any case, I look forward to seeing this pan out. A little bit of fun in this for everyone (except the poor saps who click the Nanniebot’s Paypal button!).
Mr. Wightman:
Above, you wrote:
> with only seeing evidence FOR the AI and NONE
> against (making you sound, frankly, like
> a$$holes),
May I see the evidence FOR? I must have missed it. I trust you are not talking about that transcript.
I haven’t seen any discussion on this segment:
Michael Williams, Nick Mote, anyone care to comment on the likelihood of a chatbot [simulating] misreading?
Andy: Ok, why not.
Reading through that segment of the transcript, it’s fascinating to see what lines the “bot” ignores as irrelevant to the conversation. It ignores questions about its dad’s name and where its dad lives. Why? That’s a tough decision to make.
Then, it answers the misread question “when were you born?” with its age. That’s strange. Easily understandable to a human, but still not the way a human would talk. Which means the bot didn’t learn to talk that way from reading a human. Which means the “misreading” of the question is suspicious; it sounds fake because the bot answers in a way a bot wouldn’t if it had actually misunderstood.
Anyway, glancing through the transcript again, Mr. Marlow doesn’t ask many of the questions I would have. The “bot” asks him “why?” questions several times, but Mr. Marlow mainly asks for factual information: “what?” “where?” “when?”.
What I’d like to see is an exchange like this:
a: What is the capital of Spain?
b: Madrid.
a: Why did you say Madrid?
b: Because you asked what the capital of Spain is.
a: Why did you answer?
b: Because we’re having a conversation.
a: How did you know the capital of Spain?
b: I looked it up on Google.
a: Why did you do that?
b: Because you wanted to know, and I didn’t know already.
a: Why did you want to find the answer to my question?
And so forth. These questions are hard for humans to answer, because they eventually devolve into “I felt like doing it”. Pressed for details, an adult human can usually explain that feeling: they like to feel smart, they want to be helpful, &c. This level of self-awareness would reflect the most complex aspects of the human mind, in my humble opinion.
More here.
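As a toy illustration of why that chain of “why?” questions is so hard for lookup-style bots (a generic sketch, not a claim about how any particular bot is built):

```python
# A fact-lookup responder can answer the factual question, but it keeps no
# record of *why* it answered, so the first "why did you say that?" falls
# straight through to a canned fallback. (Illustration only.)
FACTS = {"capital of spain": "Madrid."}
FALLBACK = "That's an interesting question."

def respond(question: str) -> str:
    key = question.lower().strip().rstrip("?.!")
    for topic, answer in FACTS.items():
        if topic in key:
            return answer
    return FALLBACK   # no model of its own previous turn, so no self-explanation

print(respond("What is the capital of Spain?"))  # -> "Madrid."
print(respond("Why did you say Madrid?"))        # -> fallback; the introspection just isn't there
```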
I may be short-stacked in the intelligence and experience quotient in this discussion, but I attempted (futilely, mind you) to program a functioning neural net a few years back. A friend and I spent some time trying to come up with a neural net chess-playing program. I suppose it would be a real version of the Turk, except that it played absolutely horribly. We used parameters for each piece on the board, letting the values at each point correspond to board position. By running feedforward interpretations (I think that’s the right term), we then attempted to find the best moves in that manner, then use eventual outcomes to validate the decisions. To make a long story short, it didn’t work, and while maybe we were just failures in programming compared to Mr. Wightman, I’m gonna have to call bull-crap on the whole affair I think. To correspond a neural net to a system of language would require a huge amount of parameters, not to mention you would have to spend a considerable amount of time “teaching” the network before it would be useful. In our attempt, we fed our network every grandmaster-played game we could get our hands on, teaching it that in any given situation, the grandmaster’s decision could be labelled optimum. Where’s any talk of this? Am I mistaken in my thought that to establish the network itself would require a lot of startup time? A whole lot more than 2000 chat conversations, in my opinion.
Am I wrong? Like I said, I’m no expert, but this is what struck me when I read the article in NS a little while ago.
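For anyone curious, here is a rough sketch of the kind of feedforward evaluator described above (my own guess at the shape of it, not the commenter's actual program). The network itself is trivial to write down; all the difficulty is in training the weights, which takes vastly more labelled examples than 2,000 chat logs would provide:

```python
# Rough sketch of a feedforward board evaluator (illustration only). With
# untrained, random weights the score is meaningless; getting useful weights
# is exactly the part that needs enormous amounts of training data.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(64, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)

def evaluate(board):
    """board: 64 numbers, one per square (e.g. positive values for White pieces, negative for Black)."""
    hidden = np.tanh(board @ W1 + b1)   # single feedforward pass, no search or lookahead
    return (hidden @ W2 + b2).item()

position = np.zeros(64)
position[:16], position[-16:] = 1.0, -1.0   # crude stand-in for an opening position
print(evaluate(position))
```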
When thinking about wacky claims, let’s think about “good” science, as opposed to pseudo-science.
Firstly, I classify AI as a science.
Features of good science:
1 Science is logical and rational; it typically builds on existing knowledge and explains what is observable in more and better detail. The person making the claim can explain how the final conclusion is reached, or how the machine does what is claimed.
2 The hypothesis can be tested. When people say things like “it won’t work the same under testing” or “god can never be tested”, you probably aren’t in the realms of science anymore.
3 There shouldn’t be unexplained gaps in theories. Going from nothing to some complete solution with no steps in between is a bit unreal, let alone without showing any scientific method.
4 The claimant makes well-defined claims about what their theory/product can predict or do. Not vague generalities but testable claims which can be compared to predictions.
5 Requires objectivity. The claimant will be objective and not just attack others who question their claims. Subjective claims are useless. Claims are made using facts which are measurable.
6 Evidence always needs examining by others. Independent others, and a few of them.
7 The experiments should be repeatable. And by others, to remove tester bias. (Or fraud.)
8 Others who know something about the subject should be given full access to the claims to examine them and see if they make sense. They can sign NDAs, etc.
9 Coincidence is not accepted as proof. D’oh!
10 Anecdotal proof isn’t proof either. Gee, “look what I claim my software has done” is not proof. Proper testing does generate proof.
With these 10 points in mind (PS: they are not my original ideas, but good ones anyway), let’s have a look at the claims published in New Scientist about the ‘nanniebot’.
Note that *real* science will stand up to all 10 of these features. Note that with the ‘nanniebot’ claims, you’d have trouble getting convincing arguments for any of the 10 at this stage. Let’s just hope that Jim Wightman hops out of his jester/fool outfit and comes clean with some real science from a real scientist.
Now, IANA AI expert, far from it. And I’m pretty certain that this is all a hoax. But it strikes me as specious reasoning to dismiss this as a hoax because I/you/any given expert can’t work out how it works. It just may be that this is a massive paradigm shift, in which case you might expect it to be a little incomprehensible, particularly to experts in the field with entrenched views on the topic.
A far more profitable avenue of attack in proving it is a hoax is a lack of repeatability under controlled conditions, something currently demonstrably lacking.
You know what I think?
This is just an elaborate hoax, you’re all Jim. Jim is, the bots are, this blog is, the linkers are, the comments are, Cameron is, Nathan is…even Belle du Jour is.
So, to paraphrase:
1) ‘I am trying to save children’s lives’ is about a half step removed from a Daily Mail induced ‘won’t somebody think of the children’ headline. Too much Ricky and Oprah before bedtime I think.
2) MIT, MIT, MIT? I sleep in my MIT PJ’s too. I can’t believe it, man, this New Scientist story just totally, like, broke, you know and I didn’t even get a chance to fax my code over to the guys at MIT ‘cos, you know, they can dig it. Far out. Just when I had my wikked cool new uber ‘site ready to go too.
3) Sheeple. Mainly from your quotes on http://overstated.net/04/03/22-my-chat-with-a-nanniebot.asp, where you are Mulder to my Scully. You hath divined the one truth, and although you attempt to convince us of its power as the answer to our ugliest social ill, we still spurn you. Hate us.
4) You *passed* the Turing test? OMG! PWNED!
Really, this is a *far* more engaging psychological study than it is a technological one.
yasth: You make a great point I didn’t notice before. In my eagerness to write about ChatNannies, I glossed over something you noticed:
Neural networks can learn to make decisions, but they’re terrible at storing knowledge. Until recently there weren’t even any biological theories about how the human brain stores knowledge. This would, indeed, be an incredible paradigm shift.
Nigel: Good points. I’m not convinced AI is a science though, at least when it comes to neural nets! It’s science-like, but it’s also somewhat of an art form. Even when I successfully train a neural net to perform some task, there’s no non-exhaustive method of analyzing how the NN is reaching its decisions. There’s no way to predict what will happen if one interior parameter is slightly changed. Neural nets are pseudo-chaotic systems, and only the smallest can be completely understood with truth tables and the like.
Steve: Even if it’s a meta-hoax, it’s still fun to talk about AI!
Alice: I think Mr. Wightman exhausted himself a while ago; there isn’t much more to analyze there.
get down, he’s got a gun!!!
Scary; I wonder what the Staffordshire Police firearms unit would make of the combination of gun ownership, psychiatric problems, and the Usenet threats cited above?
I have a crazy theory. The other project of ChatNannies is humans monitoring and reporting on chatrooms. What if the bots are actually humans? You could describe this as collecting pop culture references from the internet. An experiment in tapping the power of a distributed human brain network to create an “AI”. Perhaps the figure of 100,000 is his report database capacity.
I think Jim Wightman is actually a bot.
Has anyone thought of the wider ethical implications of this?
a) When “nanniebots” are running on many chatrooms, children obviously won’t know if they’re talking to another child or a nanniebot. Is it right to deceive millions of children in this way?
b) What if the system was used for evil instead of good? “Nanniebots” could befriend children, set up real-world meetings, and let pedophiles know the time/location automatically by email. Takes all the hard work out of “grooming” children…
But luckily, I doubt either of these are real problems, given that the whole idea is so infeasible and must be a hoax of some sort.
James: Why not bot children that lure in pedophiles? Then the bots could all just “seduce” each other and cut out the middle-man.
This nannie-bot business recalls someone who claimed some years ago to have a computer program that wrote a published novel (with some editing :-). Another obvious hoax, but I can’t recall who it was or find a URL. Anyone else remember this?
The antics of this gun-wielding, psychologically troubled Holocaust denialist worry me less than the current state of mainstream tech journalism. It wasn’t so long ago that the BBC described open-source advocates as zealots and implied that they might have written the virus which brought down SCO.
I hear New Scientist had a similar line on this one too.
“… the state of mainstream tech journalism …”
Agreed. I’ve just been through similar sagas over BBC reports on an alleged telepathic parrot and an equally alleged three-headed frog. It was depressing to find a) how deaf the BBC was to reasoned critical comment on the stories; and b) how rapidly those stories spread, without the least analysis, to other mainstream publications.
Wightman claims that he has “no backups” of his “technology,” that it only resides on this one 18 terabyte network setup. I predict that within the next few weeks or months he will report sadly how there was either a) a break-in at the network location and everything was taken, b) a fire that caused the equipment to be incinerated, or c) a massive computer failure leading to corruption of all his code and data.
In all of the above cases he will be able to say “my project is gone, this is so sad, this was my life’s work” and he’ll be done with this whole scam.
Oh, and of course this will be after he’s collected several thousand dollars in “donations” and “sponsorships.”
I think you guys might find this “hoax claim” made on the TiVo Community Forums illuminating. Case closed.
Well worth reading the whole thread. It probably will persuade you that he gets his kicks out of this stuff.
Contact details are available via whois, and you can use Companies House to look up Neverland Systems Ltd. (so the existence of his company, at least, is true).
I feel kinda sorry for this guy. He’s obviously got psychological issues judging by his tone on newsgroups etc. He is also clearly something of a misguided attention seeker (check out the whole tivomedia.org nonsense).
Jim – Get some help mate.
Isn’t it marvellous how easy it is to mess with the mainstream press – including apparently savvy websites like The Register. I am convinced most journalists know nothing at all about the subjects they tackle.
Hey, where did JW go? All flame-warring aside, a lot of people have politely asked reasonable questions and it seems rather remiss of him to leave them unanswered…
Actually, having reread the various Newsgroup postings (of both ‘Death’s Head’ and ‘Jim Wightman’ – the link is well established in other posts), I find this fella a genuinely worrying character.
He apparently has a shotgun.
He apparently is a neo-Nazi or at the very least is strongly anti-Semitic.
He has a very impulsive nature with a worryingly wild swing in his personalities.
He is quite happy to make death and rape threats when his arguments are challenged.
These seem rather more worrying than the fact he is a serial hoaxer. I’m hoping someone else has submitted all this information to the coppers!
He will have hunkered down for the moment.
He cannot possibly risk a further exhibition of his “work” in front of a skeptical & technically astute programming audience. To do so would invite a dissection so overwhelming that even his core audience of well meaning wishful thinkers would shake their heads and turn away.
His only hope is to bide his time and wait for another opportunity to perform a limited demonstration in front of a favorably inclined audience. Non-programmers who are (a) disgusted by the horror of paedophilia and (b) sufficiently foggy on the subject of AI to think that hyper-realistic, thousand-instanced, perv-catching chat bots are the kind of thing that a smart bloke might come up with in his spare time at home – they’re the target.
We will all go off to do other things, and nothing will be heard for a while, and then in a couple of months there will be another spate of articles. Because the media wants to believe this, and he can always outlast his debunkers.
For a self-confession of the various aliases of Jim Wightman, see
THIS. If anyone is in any doubt that this dude is a wacko then go through THESE. It’s quite revealing, and that’s not including the posts as Death’s Head.
Further additions to the trail of half-assed projects. There’s also Checkherefirst.com, which was left at template stage in Feb 2003 (the Google cache – now minus the American Chopper photo – shows the link with Neverland Systems site, itself part-finished). And there’s this hosting service offer that doesn’t seem to have materialized either. Another curious thing pointed out by a Usenet poster: “Neverland”, with its pedophile connotations, is a mind-blowingly stupid name to associate with a child protection service.
In the vague hope that JW is still reading this:
Allow me access to your bot and I will test it using my own test method which I call the Milligan-Turing Test (in honour of Spike Milligan). I ask the respondent nonsense questions, for example, over at http://www.pandorabots.com/pandora/talk?botid=f5d922d97e345aa1 I’ve just got this exchange:
** Begin Quote **
>I’m cooking a meal tonight – do you think I should use old sump oil or some diesel oil on the salad?
ALICE: Do your friends call you cooking a meal tonight do me think you should use old sump oil or some diesel oil on the salad.
** End Quote **
Not a human response (a human would know that the statement, “I’m cooking a meal tonight…” isn’t my name, which is how the bot has interpreted it to generate the response “Do your friends call you cooking a meal tonight…”). So ALICE fails! [For reference, I posed the same question on IRC to a friend of mine and I got the response, “Should you really have either of them with salad? Fool!”]
I think the Milligan-Turing Test is substantially harder to pass than a straight Turing test, and is a better benchmark of AI.
Iain
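As a footnote to Iain's test, the echo he got isn't mysterious: ALICE-style bots have fallback templates that splice the entire unmatched input back into a canned sentence. A rough reconstruction of that behaviour (my sketch, not the actual AIML):

```python
# Sketch of the fallback behaviour behind "Do your friends call you ..."
# (a reconstruction for illustration, not ALICE's actual AIML categories).
import random

DEFAULT_TEMPLATES = [
    "Do your friends call you {star}?",
    "What makes you say {star}?",
]

def fallback(user_input: str) -> str:
    star = user_input.strip().rstrip("?.!").lower()
    return random.choice(DEFAULT_TEMPLATES).format(star=star)

print(fallback("I'm cooking a meal tonight - do you think I should use "
               "old sump oil or some diesel oil on the salad?"))
# The whole sentence comes back verbatim inside the template. A human parses
# the question; the bot just echoes it, which is what makes nonsense questions
# such an effective tell.
```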
This excellent suggestion runs up against the same fundamental difficulty as the previous suggestions – there is no bot.
A mean-spirited person might be tempted to send the URL of this site to the news editor of the Wolverhampton Express and Star.
hi jim,
if you’re reading this do please call me back to arrange the demo of the nanniebot.
you did guarantee to demo the software to me and an academic colleague either at the end of last week or on monday 29th march, but i’ve not heard back from you, it’s sunday night, and i have to arrange for people to come with me tomorrow.
many thanks,
ben
Don’t hold your breath, Ben.
I stumbled upon this page quite accidentally and I thank God for it. At the risk of sounding redundant, I must say, this is really a great breakthrough in AI, if it really is. C’mon, this sounds like…god damn it, I don’t have words. That bot is exactly as human as AI can EVER be. At least IMHO.
But if this is a hoax, then it’s still got to be great! I mean, really, some people must have way too much free time…
I hope we get some news soon, because everything seems frozen. Dr Ben, what happened? Can anyone point me to recent updates about this? Please!
Contact me via email if anyone has any news.
And if Jim has really done what he claims, and I greatly hope he has, I would so love to see a live demonstration on the net..God this is UNBELIEVABLE!
— Knight Samar.
I think by now it is pretty clear that the AI element of ChatNannies is a bust.
Mr Wightman’s online behaviour is consistent with several forms of mental/emotional disorder (other than his self-declared conditions) and it bears thinking about that this possibly quite deluded man will soon arrive at a crisis point, if he continues in his assertions.
Several people have predicted likely ‘excuses’ that they expect him to use and, to be honest, I really hope he takes the hint, uses one and quietly goes to seek some help. I think that if he genuinely believes what he is saying, Mr Wightman may have bigger problems than trying to prove his technology works.
Of course, if this is all about gaining attention – and publishing a website that combines extravagant claims with ‘hot’ terms like paedophilia is by any sensible definition ‘inviting attention’, despite his protestations to the contrary – then I think we have to look a little more closely at what sort of operation ChatNannies was shaping up to be.
OK, everyone has picked up on the claim that the AI can hold human-like conversations, but I would ask more about the assertion that it can genuinely detect paedophile grooming, especially within a single session. Grooming involves a pattern of behaviour over time, and detection would assume that bots can store and cross-reference all their conversations (linking those that it can identify as being with the same individual – who will, in the case of a genuine paedophile, almost definitely be using a number of separate online identities!) and take action / ‘make reports’ based on whatever heuristic or algorithm Mr. Wightman has programmed.
This is concerning on two counts: 1) ChatNannies would need to store, even just temporarily, a large number of transcripts of individuals’ chat sessions (many of whom may well be children) without their consent, or even knowledge. This would in effect make ChatNannies a self-appointed auditing authority, with potentially supra-legal powers. 2) Mr Wightman cites no direct experience in the field of working with children (let alone vulnerable ones), and demonstrates no links or relationship with trusted bodies (e.g. the Police, children’s charities, etc.) in the field of child protection. On what basis, therefore, does he claim to be able to reliably identify paedophile activity?
This service might, at best, end up being a form of online vigilantism – and could actually, if implemented, end up being a LOT worse. I speak as an IT professional who HAS been working with kids and the internet since 1996, including secure services to kids in Care, and find it worrying that mainstream journalists can be so ‘blinded’ by technological claims as to fail to check out or consider the context of these claims.
From a certain angle (and to a sick mind) this idea may have seemed plausible at first – e.g. use publicity on the NannyBot to get a whole lot of people to sign up & volunteer ‘reports’ on chat rooms. Develop a list or ruleset based on these reports, then release some form of chatroom blocking / filtering software based on it. This is entirely consistent with the effort Mr Wightman has demonstrated to date – i.e. he has a website and has permitted a couple of sessions that purport to be with his AI to be transcribed.
The lesson I hope New Scientist, et al. take from this is that before publicising such claims in future they practice a bit more of the healthy scepticism and professional journalism for which, I’m sure, they would prefer to be known. Kudos to Andy, Ben, Michael and others for doing a proper job!
I think Phil makes the most important (and least mentioned) point about this: “On what basis, therefore, does he claim to be able to reliably identify paedophile activity?”
If the complexity of the AI in the Nannybot is so advanced as to be able to correctly infer meaning from ambiguous statements, then why bother trying to hold a conversation? Why not just browse the existing conversations?
Legal practicalities aside for the moment (and let’s face it, I can imagine Mr Blunkett is likely to jump for this sort of thing), passive scanning should be more effective (IMHO), and would avoid accusations of entrapment.
Scott Turner: The book you’re thinking of is probably The Policeman’s Beard is Half Constructed, “written” by a program called RACTER. It’s interesting to read, but the text was heavily edited by humans, there’s no plot, and a lot of it is just plain nonsense. (It’s out of print, and I can’t find my copy anywhere…)
I work for a major IT vendor in the UK.
Mr Wightman is listed as a “timewaster” in our database and has a record of being abusive to staff.
This from the Net Nannies website:
29/03/2004 20:34:00 Jim – now available
Jim, creator of this site and AI technology, is looking for development/management work. He is available immediately. The ideal would be to allow working from his home-office so he can continue to mind ChatNannies. Reasonable rates! If you are interested please mail him here
I feel the UK IT industry is that little bit safer with Jim back on board! Go on Jim, go get ’em!
He’s a 26-year-old skater too. Very worrying.
My god look at the data they are collating
http://www.chatnannies.com/index.asp?pagetype=srcsub
Perfect for paedophiles? Which rooms aren’t full of em already? This guy is seriously scary.
Here’s his CV from his RentACoder account, now suspended or terminated, presumably for failure to deliver on two projects. Surprising he’s never previously mentioned the PhD he “picked up along the way”.
I just want to say that you are all as guilty of self-obsession as the focus of the thread. Each of his obviously false statements produced in excess of 50 responses, all saying the same thing. I hope you are all proud of yourselves for pointing out over and over again how you are smarter than an obsessive-compulsive idiot.
This whole page to me is stupidity on the level of hunting a naked animal with an assault rifle and feeling better about yourselves when you survive the encounter.
Attack something that needs it like the electoral college for example, or the ridiculous nature of censorship.
I would never have tried to debunk the story if Jim hadn’t successfully duped the mainstream media first. They gave him credibility and attention, which he then used to defraud money from people. We’re simply asking the questions that the mainstream media didn’t.
Subjective: I can think of nothing more puerile than reading comments on a topic, accusing the contributors of ‘self obsession’ (?!) and then suggesting we debate ‘the ridiculous nature of censorship’. But cheers for the laugh anyway.
He’s quite the storyteller and that’s what makes this so fascinating. I’d love to know which bits of his life are made up and which are real.
He has clearly not written any of the books he claims to have (although I’d love to see his union of the Zachman Framework and .NET ;-). I hope the gun is made up!
Ho-hum. Well Jim, if you’re reading this, I think you’re a star. I don’t think you meant to be, but in your own way, you are.
I’m glad that my comment was so close to the mark that you chose to change you identity in an effort to refute it objective, thank you for proveing my point.
I’d say thanx for the laugh but you arent that amuseing.
Hey guys maybe you could spell check eachother, i know that self important armchair experts love doing that 😛
Every last one of you is just as bad as this idiot, and so Am I.
At least you’ll have the last word, I’m not replying here again.
L8r kids (mental age not chronological, before you whip out scans of Birth certificates)
😎
PS “proveing” – tee hee hee
Stop fighting… It's 00:25 GMT over here in sunny England and it looks as if http://www.chatnannies.com has disappeared….
Damnit! Sorry for the double post, but I meant to say 00:22 *April 1st*
Subjective: of course you’re right. Here, it’s like shooting fish in a barrel. But, as Andy says, in the real world this is someone who is scamming people who don’t know better. That’s why it’s worth opposing.
Chatnannies.com comes up fine for me (and it’s not cached, as I’ve never been there before).
Looks like chatninnies to me… How about an AI bot that catches terrorists?
Well, I've just read for the first time all this exciting news about Nanniebots! A couple of years ago I developed 'Natachata', an AI 'chatbot' that allows adult users to engage in smutty/rude chatting via SMS text messaging. It worked very well and is still in daily use (and earns me good money!). There were many press stories about NataChata (all favourable!), and BBC Online recently ran a story about the use of my chatbot.
Anyway, it took me a fair time to develop this chatbot, and I feel that I have a fairly good knowledge of chatbots and the coding/design of them. (Plus MSc and 1st class degree, blah blah blah.)
Having analysed the conversations between a human and Mr Wightman’s Chatnanny, and having read something about the background of Jim, my carefully considered conclusion (without any further evidence or documentation or testing of his Chatnanny), is that this guy is on another planet 🙂
But it has got me thinking about modifying my existing (and well-proven) chatbot, changing it from a rude and sexually explicit 'woman' into a child who is apparently open to sexual suggestion, and who can track/monitor other chatroom users who exploit this.
Whether Jim is barking or not, the idea of ChatNannies seems sound, so I'd better start looking at my NataChata code again :)
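For readers wondering what a chatbot of this vintage actually does under the hood, the basic machinery is pattern matching plus canned templates, in the ELIZA tradition. The toy sketch below is purely illustrative; it has nothing to do with NataChata's or anyone else's actual code, and every rule in it is invented for the example.

```python
import random
import re

# A toy ELIZA-style responder: match a pattern, fill in a canned template.
# Real chatbots layer many more rules, state tracking and fallbacks on top
# of this same basic idea, but none of it amounts to understanding.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE),
     ["What makes you feel {0}?"]),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE),
     ["Is that really the reason?"]),
]
FALLBACKS = ["Tell me more.", "Go on.", "Why do you say that?"]

def respond(message):
    for pattern, templates in RULES:
        match = pattern.search(message)
        if match:
            return random.choice(templates).format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I am tired of chatbots"))  # e.g. "Why do you say you are tired of chatbots?"
```

The gap between this sort of trickery and a bot that can "correctly infer meaning from ambiguous statements" is exactly why the Nanniebot transcripts raised eyebrows.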
The download page has been updated (as promised)! But the download still isn’t available… gives a 404, and the directory it’s supposed to reside in gives a 404 as well.
More smoke and mirrors?
Re Barnardo’s: See Barnardo’s Statement: Chatnannies: “We did not give permission for our name to be used in this way, and have not received any money from the website. We have requested that the website ceases to use our name”. Before the cache gets overwritten: the ChatNannies sponsors page originally said: “For every payment received for sponsorship … 10% of the sponsorship donation will go to Barnardos and the NSPCC in the UK …”
“We are closing the ChatNannies website”
I’ll believe it when it happens. I see that the above has disappeared and been replaced by a claim that the site and AI are to be translated into Portuguese. Never fear; there’s a thriving skeptics scene in Brazil too.
I think these pedo-protectors actually do more harm than good. The more we try to technologize guardianship, the less responsibility we put on parents, when the onus for safe net usage really belongs with the family: teach good habits, monitor what they can, maintain good communication and teach good values. Chat nanny robots are just really bad babysitters.
Jim is currently contactable via the Developer’s Forum by the looks of it. He is threatening to sue people at the moment if they claim he made the aforementioned Newsgroup postings.
No… it couldn’t be… could it?
http://www.friendsreunited.co.uk/friendsreunited.asp?WCI=membernotes&member_key=689294
Well, it matches the known appearance, dog, district, and attitude.
Exciting news on the Chatnannies forums – Mr Wightman has promised to email the Nanniebot source to a forum poster who’s volunteered to help with coding, on the basis that they’re being blandly supportive instead of vicious and skeptical. “who is stupid now!!” says Jim. Who indeed!
(Incidentally, it seems that the forums don’t bother to validate your email address when you sign up.)
A promise is hardly news: actually delivering on it would be.
In today’s Guardian – Concern at website that ‘monitors paedophiles’. “A website that claims to monitor paedophiles in internet chat rooms has come under attack from children’s charities which believe it could inspire vigilantism and put children in danger.”
Jim should listen to the experts and stop this silly campaign now. He's posted a 12-paragraph rebuttal of the Guardian piece on his chatnannies.com website. It briefly mentions "the AI" but talks more about the "volunteers" recruited to monitor chatrooms. Who would sign up to this "work"? And what are their reasons? And he bemoans the fact that he and his partner have been criticised for being "vigilantes" – he uses the right-to-free-speech argument, then some scare tactics, linking only to searches of news websites on the words "chatroom" and "pedophile".
Jim, if you're reading this, please listen: the people who know most about paedophiles (the police and children's charities) are asking you to stop. You are doing more harm than good.
The Guardian article has this to say: ‘John Carr, internet safety adviser for the children’s charity NCH said… “Amateurs should stay away from this field.” ‘
Give it up Jim.
Simon: Without calling into question your technical skills to produce a better ChatNannie, stop and think a moment about the implications. Kids and teenagers talk about sex in chatrooms, and most of the sex talk is between kids, probably 80%+. So your ChatNannie+ is going to simulate a kid who wants to talk sex and will almost certainly attract another kid who wants the same. If this is going to work, the talk has to be to some extent 'real', i.e. it takes two to tango. Whilst I might not be terribly keen on my 14-year-old lad sex chatting to some filly, I am certainly way less keen on him chatting to a trampy bot who then goes and tells you exactly what has been said. There are 3000+ convictions for child sex abuse in the UK per annum and only about 15 involve internet grooming, out of gawd knows how many steamy adolescent conversations on the net. Get this in perspective and treat the claims for rampant internet paedophilia with the same scepticism that has been accorded ChatNannies. (The science is only marginally better and is subject to the same dumbass standards of reporting.)
Our kids deserve their privacy too – after all talking dirty is the ultimate ‘safe sex’. Wish the bloody net had been around when I were a lad!
Magpie
'John Carr, internet safety adviser for the children's charity NCH said… "Amateurs should stay away from this field." '
Coming from good old John Carr that’s a bit rich.
To Nugget: I'd never heard of Carr before the Guardian article, and assumed he knew a bit about the child abuse debate. My point stands: he knows more than Jim Wightman. Jim's latest statement reveals all: "no charity organisation in the UK has returned faxes, emails, phone calls or letters placed by us to invite them to review our technology in person."
If Jim was serious and had something real to offer he would have got the backing of at least one of these organisations. (He’ll no doubt counter that they are slow and tedious organisations who don’t understand his technology – but I don’t buy that).
Jim says: “we just want to STOP THIS NOW.”
Of course you do Jim, don’t we all. But you’re not helping, you’re confusing the issue.
Nugget: what’s the problem with John Carr? His credentials look very good to me.
magpie68 > Who would sign up to this “work”? And what are their reasons?
Yep. However genuine the intention of the organisers, given cases like this recent one, charities and police are bound to be very suspicious of a setup that uses unvetted volunteers.
To Magpie: re John Carr – I don't want to hijack this thread, but briefly: JC certainly has the credentials, but even the most cursory examination of his claims, e.g. that 35% of men who look at child porn are abusers, reveals that this is almost complete nonsense. The supposed research to back this up has arguably even less validity than ChatNannies, and he conveniently ignores rather better but unfortunately (for him) contradictory figures (8%). He is undoubtedly a skilled 'child protection advocate', but he is not, in any sense that I would use, an expert. Like JW, he currently has the unquestioning ear of the press, who, as this episode shows, only have to hear the word 'paedophile' to lose any ability to report or investigate with integrity. I happen to agree with JC on this occasion – what worries me is not that JW is a fraud but that ChatNannies may actually work after a fashion – the consequences are potentially disastrous (and will do almost nothing to prevent child sexual abuse).
Jim Wightman left some comments on my site, which I’ve now responded to. I think I’m pretty much done with this, since it’s obviously a fraud and it’s starting to bore me. If you play with the monkeys too much, people will start thinking you are one.
I agree with Michael. I suggest the best option is to contact New Scientist and the BBC. They were responsible for spreading this story, and it’s very irritating that despite widespread informed skepticism (both on technical grounds and charity criticism) they’ve still made no retraction nor revision of the original uncritical coverage. If anyone reading this can offer an authoritative objection, contact them via the links above.
Hmmm: Absolutely, the real concern with this is not that ChatNannies might be a hoax (or, more charitably, a pipe-dream) but that so many supposedly reputable news agencies took both the claims for the technology and the underlying ethics completely uncritically. And, as you rightly point out, they have STILL not accepted that a robust retraction (or even debate) is warranted. Paedo-hysteria can be funny, as Brass Eye so memorably showed, but it can also cause real damage (even death), especially when 'blessed' by unthinking institutions. The BBC, in particular, is becoming disturbingly 'tabloid' in the manner of its reporting and non-discussion of these issues.
Let me warn those who might be tempted to join and post to Wightman's private forum: he may not validate the email address, but he has your IP address if you post. If that IP is personally traceable, you might find yourself subject to future harassment. And if, as I am willing to bet, the police are eventually involved, your IP address might also drag you into the inquiry. So be careful.
Out of interest, why do you think the police will get involved, and why would anyone's IP address drag them into the enquiry? I hope the police spend their time looking into more important things than this!!
Copyright for starters? Without permission, reprinting the whole New Scientist article, and their logo, on his site is almost certainly a breach of copyright. See the NS terms and conditions page: “You may not reproduce or distribute any part of the content for commercial purposes”.
Well “Anon,” if it were merely an allegation of defrauding contributors, you might indeed luck out as the amounts allegedly involved are probably small and the police have quite a backlog of cases. But throwing the topic of paedophilia into the mix frees up a lot more resources to look into things, and the publicity you’ve gotten so far makes it worth their while to look.
The problem from Police Detective Jones’s viewpoint is that when someone publicly expresses a keen interest in paedophilia and claims to have a great cyber-weapon for catching them, which upon further examination turns out to be a complete fabrication, one is left with the keen interest.
Prudence dictates no further discussion of the specifics.
Heh heh, unfortunately "Anser" you undermine your response by assuming I am Jim Wightman. Unfortunately I'm not (of course that is hard to prove!). I am instead a sane person who thinks that this has become a rather ugly little spectacle which no longer requires coverage. The facts are:
* Jim Wightman is certainly a hoaxer with regards to the Nanniebots
* ChatNannies appears to be barely operating anyway if you look at the actual reports on chatrooms
* The whole thing has been debunked from both a technical and child-protection viewpoint
It is VERY unlikely that New Scientist will sue them for copyright breach since it's a waste of bloody time. The police will look into it and I imagine they'll do a bit of research and not bother to waste their time on an obvious attention seeker.
I think some contributors to this board are under the impression this is really a big deal, but in all honesty it is yesterday’s news.
I’m relieved to know that for once, an anonymous interlocutor isn’t Wightman in drag. 🙂 I agree that it’s about 50-50 this fades away with no further developments; that’s certainly been Wightman’s pattern in the past. Those odds change if he continues to try to flog it in any way. As of now, the whole thing is either in someone’s in-box at Scotland Yard Cybercrime or it’s not. If it is, the other shoe will eventually drop, no matter whether we’ve lost interest in the meantime.
Latest from the Guardian: Website removes paedophile help service and ‘We genuinely wanted to help people’.
See New Scientist: “Serious doubts have been brought to our attention about this story. Consequently, we have removed it while we investigate its veracity. Jeremy Webb, Editor”. I have an intuition that a BBC update may well follow.
Interesting. ChatNannies briefly carried news of the Brazilian translation deal with IPDI, but it’s disappeared, presumably to stop follow-ups. However, it appears at Galileu magazine (here’s a Google translation).
I posted a question asking Jim about his PhD – oddly, it was deleted and I was banned.
Whatever happened to innocent until proven guilty? I hope your kids are tempted and abused by people in chatrooms, maybe then you’d listen more objectively.
They have said when they are going to demo the technology, they already have links with the police, and it's not their fault if UK charities are too proud to accept outside assistance.
I’m not saying I trust everything they say on chatnannies but even so, at least they are trying to do something instead of moaning on about the subject like you sad wankers are.
“Sceptinator”: Whether you are Jim Wightman or not (and I fear you might be), please notice the lack of interest on this topic in recent days. The reason for this: through the questioning and investigation you will see above, the Nanniebot concept, Jim Wightman and ChatNannies have all been thoroughly debunked. There is no more debunking to do here. The debunking is complete.
BTW, I love the "I hope your kids are tempted and abused by people in chatrooms, maybe then you'd listen more objectively." comment. This shows a) that you are some sort of bizarre sadist headcase, and b) that you do not know what "objectively" means, as a person in that particular situation would be very much looking at the whole thing rather subjectively.
I would suggest to you that you consider the point that false hope and false remedies do more damage than good in these cases since people’s time and attention are diverted away from options that might genuinely make a difference.
Ho-hum…
Trying to do something?
Spin a line of bullshit? – it’s quite clear Jim is a headcase.
What a vile suggestion you make – wanting people’s children to be abused.
Yes, it reminds me of the Caucasian Chalk Circle. Whether it's Wightman or not, anyone who hopes for children to be abused, just to be proved right, is not a trustworthy source of opinion about the protection of children.
Mr. Wightman should be careful; we’re issuing fraud indictments for this kind of stuff in America.
And you, Mr. Williams, should also be careful – you have publicly claimed a number of times that our technology and our site are a hoax and that we're defrauding people…
…we (that's ChatNannies) are issuing libel suits against people in this country that make these claims; perhaps it's time to start on the USA?
Just because you don’t understand how we can do something so much better than you can, Williams, despite you claiming you have ‘had enough’ of the debate, doesn’t mean it can’t be done. Sorry to put your nose out of joint by just being _intelligent_ rather than _academically trained in AI_ and sorry you’ve devoted considerable time to researching and working with AI in _completely the wrong direction_.
Perhaps if you spent less time polluting the internet with your nauseous bilge and more time _learning_ and _studying_ and being open to alternate possibilities you wouldn’t be the anal retentive your online presence suggests. University doesn’t make you wise, my young padawan learner. You should remember that.
See? There _can_ be fun in the truth, it doesn’t all have to be spiteful vitriol lacking in insight.
P.S. I find great amusement in your claims of ‘completely debunking’ our work – perhaps to yourselves you’ve proven that 2 + 2 = 5…but to anyone with even a residue of a braincell, you’ve proven nothing – in the meantime we’ll wait for the Loebner prize to prove us right OFFICIALLY.
Jim – who are you kidding? No one is interested. No one believes you. You’ve never responded to any technical question satisfactorily (in fact you actively delete the ones on your forum you don’t know how to talk around).
Now you've invented lawsuits. You seem to be stuck in a rut on this one and it's starting to look a lot like self-harm. Not all attention is good, you know?
Apologies for the double post. I'd just like to point out to Jim that it is often impossible to prove something is not true. However, in the absence of proof in the positive there does come a point where common sense rules. The Nanniebot idea overstepped that mark a while back, and anyone of any relevance is utterly uninterested now. I'm just sad enough to continue arguing with you! Sigh…
The guy is a bullshitter and clearly not right in the head. If you really did program this great A.I., why is your site full of dead links?
And what’s this on your chatnonsense site:
06/04/2004 – Full New Scientist article
"We have been asked by New Scientist magazine to remove our reproduction of their article about us (even though it is about us) for copyright reasons."
Why do you link to the Google archives rather than the updated "story" that New Scientist ran?
Well that is those silly arses at New Scientist’s fault for running a hoax story in the first place.
There once was a fella called Jim,
Who decided one day on a whim,
To proclaim from on high,
He’d created AI,
But actually his bot was just him.
I thank you…
BTW, the above is a ‘Jimerick’. I think we could start a competition. ;-P
Another retraction – the original BBC Chatnannies article now gets page not found.
The waxy dot org web of spies
Found out all of Jim Wightman’s past lies.
Exposed as a pyche
With faults in his psyche,
Conclusion: he’s got no AIs.
LOL, that's pretty good 🙂
And equally as true as most of the rest of the fantasy comments on this page!
While I’m here…
Anon – I haven’t actually been asked any technical questions by anyone from this site. I have been asked ONE technical question on the ChatNannies dev forum, which I’m still trying to find time to reply to (don’t forget I have a real job too). In addition, I think you’ll find that I say I’m going to delete posts from the forum if they are pointless personal attacks – and as far as I recall, I’ve only deleted one from Charles Knight, whose post was akin to “The guy is a bullshitter and clearly not right in the head.”, which even you lot should agree isn’t a technical question 😉
If I could get a rational answer from you for once, exactly how did you expect me to instantly ‘prove’ the quality of our AI to the _whole world_ the second the New Scientist article was published? For the record, as I’ve previously stated elsewhere, the New Scientist journo was offered a complete independent test of the AI in any form he wanted – he declined.
I’ve been nothing but open about this; ok I’ve refused to share the source code at this point with plebs like Williams, but I have said that it will be released after the Loebner prize! What more can I do? Do you want me to quit my job and travel the world in an RV proving the AI to one town at a time? Or would you rather it is proven in one place at one time by people that are truly qualified to judge?
I wouldn’t expect people to believe this without proof but I saw the fact that the Police were involved as the beginning of enough proof to satisfy the hordes until the Loebner prize. I was wrong, of course.
It doesn’t alter the fact of course that you all have gone completely overboard in rubbishing me personally, instead of asking intelligent questions about the AI which I could answer to your satisfaction. So I repeat – you have proven nothing and debunked nothing, except to yourselves. The rest of the patient world can wait until we go properly public, and discover how true our claims are.
No, Charles actually asked a question (one which you've repeatedly refused to address) about the PhD you claimed on your RentACoder CV:
“Jim Wightman BSc PhD … Along the way I picked up a PhD too”.
He said he’d checked the British Library, where PhD theses are indexed, and found no record of one under your name.
This thread is about to hit 150 comments and, while still entertaining, isn't going anywhere, so I'm closing comments. If you have any specific news, articles, or information about this topic, e-mail or IM me. (My contact information is in the left-hand sidebar.)
If it's relevant, I'll post an update. And when ChatNannies is conclusively debunked or proven, I'll post a new entry about it. Thanks for participating, everyone.