There's an article on Nanniebot in New Scientist this week, which includes a transcript of a conversation with it. It's scarily good. I did manage to guess correctly, but I was a long way from certain. If I hadn't been told in advance that one was a bot, I wouldn't have known.
Harry
Is this not a good thing?
I know that the Admin Mods and sysops here follow a strict code, but we sure would not want anyone here who would engage with underage posters...
I am assuming that the software does not lead the respondent on till it is too late...
As adults we all have the opportunity to bail out if we suspect something not quite right... and if male ego prevents that, maybe we should now consider it...
Sorry, just thinking: anything that protects kids is good... as adults we are here voluntarily and can always say no...
Gmanxxx
I don't think the SH chatroom has anything to worry about from Nanniebot itself - I expect the ops boot children fast enough that it's a waste of time for paedophiles. But the technology exists, it will only get better, and it will be used. We already don't know who we're talking to. Soon it'll be a question of what we're talking to.
From my point of view, that's OK. I don't visit chat rooms all that much, but when I do, I go there to chat. If a conversation is entertaining, does it matter whether it's a man, a woman, or a computer on the other end?
If you're there to find people to meet, you'd care a lot more.
Hello DJohn,
The New Scientist was mentioned as the source of this information in the AOL report.
Whilst I, and I hope every member of Swinging Heaven, believe that all paedophiles should get their 'just deserts' under the full weight of the law, the article states that over 100,000 of these bots have been sent out. A large number of chatrooms must therefore have been visited on numerous occasions, possibly including ours.
Not being a computer whizz, I am wondering whether there is any way these bots can be detected. We have an 'Adult Chatroom' which is, in the main, a very friendly place to visit. The rules, and indeed the system itself, automatically ban anyone entering the room with the words 'boy', 'girl' or 'child' in their nickname, but beyond that the SysOps have to use their own judgement as to the age of any particular room visitor.
We do not want genuine members being frightened off by the thought that they are being watched by Big Brother.
Harry0
It's much more fun being on the winning side.
George Armstrong Custer.
If the IP addresses of the machine(s) running the bots were known, a chat room operator could probably spot them. Also, I suspect that current bots wouldn't stand up to a long enough interrogation by a suspicious op.
It's only a matter of time before good enough bots get into the hands of the general public, and then it will be impossible to tell. Judging from the transcripts of recent Loebner Prize contests, though, that will be a while. I don't know what Nanniebot is doing; its author seems very reluctant to reveal any details of the technology.
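To give a concrete idea of the IP check, something along these lines would do - a minimal sketch, assuming the bot operator's addresses were published somewhere (the range below is a documentation placeholder, and real chat server software would hook this into its own connection handling):

    # Minimal sketch: flag joining clients whose address falls in a known bot range.
    # The network below is a placeholder, not a real bot host.
    import ipaddress

    KNOWN_BOT_NETWORKS = [
        ipaddress.ip_network("203.0.113.0/24"),
    ]

    def looks_like_known_bot(client_ip: str) -> bool:
        """Return True if the joining client's address falls in a flagged range."""
        addr = ipaddress.ip_address(client_ip)
        return any(addr in net for net in KNOWN_BOT_NETWORKS)

    print(looks_like_known_bot("203.0.113.42"))   # True
    print(looks_like_known_bot("198.51.100.7"))   # False

That only works for as long as the bots keep coming from the same place, of course, which is why the interrogation approach matters more.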
After spending a couple of frantic weeks trawling through ads, postings etc., I have NO doubts that at some point we are being tested by the powers who police... As long as Mods and SysOps do their stuff and get rid of the idiots, then we are clear. Please bear with us all - we don't want the site closed down, Mark taken to court or raids to happen cos the forum/photo ads are allowed to degenerate into a free-for-all open to those who enjoy illegal/dangerous sexual practices!
I was talking to another Mod on the phone tonight about the responsibilities we have towards site users - very interesting it was too. This is a free site, run by Mark with the help of a body of volunteers who give up some spare (can't believe I can spell that word!) time to keep this site responsible, safe and secure. If WE can't do it, then nanniebots will.
I'm a bit skeptical about the feasibility of this. A few years ago I made a bot to host my IRC chatroom. He would greet people, could differentiate between newbies and people he "knew", responded positively to hugs, sulked if he didn't get any, could talk about football, music and booze, intervened in flame wars, disciplined persistent rule-breakers, insulted anyone who ASL'd him, and even sent pictures of himself to people when asked. The thing is, although he could hold a reasonable facsimile of a conversation for a while, he was utterly useless when people wanted to talk about things he didn't know about... he had to try to change the subject, and people soon realised that he was totally inflexible and tended to repeat himself. Some people thought he was just a bit stupid, but most soon realised he wasn't human. If the "nanniebots" are as good as claimed, why haven't I read that the Loebner Prize has been won?
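For anyone curious, the guts of it were roughly this - a Python sketch with made-up nicknames and trigger words, not the actual script:

    # Rough sketch of a keyword-matching chatroom bot (illustrative only).
    import random

    KNOWN_USERS = {"DJohn", "Harry"}              # people the bot "remembers"
    CANNED = {
        "hug": ["aww, thanks *hugs back*"],
        "football": ["did you see the match at the weekend?"],
        "asl": ["ask me that again and see what happens."],
    }
    SUBJECT_CHANGES = [
        "anyway, what music is everyone into?",
        "so, anyone fancy a pint?",
    ]

    def reply(nick: str, message: str) -> str:
        if nick not in KNOWN_USERS:
            KNOWN_USERS.add(nick)
            return f"hi {nick}, don't think we've met before"
        text = message.lower()
        for keyword, responses in CANNED.items():
            if keyword in text:
                return random.choice(responses)
        # Anything off-script just gets a subject change.
        return random.choice(SUBJECT_CHANGES)

    print(reply("Newbie42", "hi"))
    print(reply("DJohn", "what do you think about quantum physics?"))

The fallback branch is the giveaway: after a few subject changes, people notice the same lines coming round again.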
Ice
Ice...
I think you created my ex-husband.
How much time did you put into your bot, Ice Pie? A simple one is relatively easy to make, but easy to spot. If a sufficiently talented team puts enough work into it (years of full-time work), the result will be a lot better.
A youth chatroom is quite a different environment from the Turing test. The other person doesn't know in advance that the bot isn't human. The topic of conversation can be quite restricted, and the bot can simply say "got to go. bye" if it gets lost.
In a Turing test, you're up against intelligent educated adults who know that one of the 'people' they're talking to is a computer. You can't run away. If you change the subject too much, you'll be found out.
Background knowledge is becoming less of a problem, with projects like Cyc. And Google really does know everything.