
Deep Learning Is Going to Teach Us All the Lesson of Our Lives: Jobs Are for Machines


(An alternate version of this article was originally published in the Boston Globe)
On December 2nd, 1942, a team of scientists led by Enrico Fermi came back from lunch and watched as humanity created the first self-sustaining nuclear reaction inside a pile of bricks and wood underneath a football field at the University of Chicago. Known to history as Chicago Pile-1, it was celebrated in silence with a single bottle of Chianti, for those who were there understood exactly what it meant for humankind, without any need for words.
Now, something new has occurred that, again, quietly changed the world forever. Like a whispered word in a foreign language, it was quiet in that you may have heard it, but its full meaning may not have been comprehended. However, it’s vital we understand this new language, and what it’s increasingly telling us, for the ramifications are set to alter everything we take for granted about the way our globalized economy functions, and the ways in which we as humans exist within it.
The language is a new class of machine learning known as deep learning, and the “whispered word” was a computer’s use of it to, seemingly out of nowhere, defeat three-time European Go champion Fan Hui, not once but five times in a row without a single loss. Many who read this news considered it impressive, but in no way comparable to a match against Lee Se-dol, whom many consider to be one of the world’s best living Go players, if not the best. Imagining such a grand duel of man versus machine, China’s top Go player predicted that Lee would not lose a single game, and Lee himself confidently expected to lose no more than one.
What actually ended up happening when they faced off? Lee went on to lose all but one of the match’s five games. An AI named AlphaGo is now a better Go player than any human and has been granted the “divine” rank of 9 dan. In other words, its level of play borders on godlike. Go has officially fallen to machine, just as Jeopardy! did before it to Watson, and chess before that to Deep Blue.

“AlphaGo’s historic victory is a clear signal that we’ve gone from linear to parabolic.”

So, what is Go? Very simply, think of Go as Super Ultra Mega Chess. This may still sound like a small accomplishment, another feather in the cap of machines as they continue to prove themselves superior in the fun games we play, but it is no small accomplishment, and what’s happening is no game.
AlphaGo’s historic victory is a clear signal that we’ve gone from linear to parabolic. Advances in technology are now so visibly exponential in nature that we can expect many more milestones to be crossed far sooner than we would otherwise anticipate. And we are entirely unprepared for these exponential advances, most notably in forms of artificial intelligence limited to specific tasks, as long as we continue to insist upon employment as our primary source of income.
This may all sound like exaggeration, so let’s step back a few decades and look at what computer technology has been actively doing to human employment so far:
Let the above chart sink in. Do not be fooled into thinking this conversation about the automation of labor is set in the future. It’s already here. Computer technology is already eating jobs and has been since 1990.

Routine Work

All work can be divided into four types: routine and nonroutine, cognitive and manual. Routine work is the same stuff day in and day out, while nonroutine work varies. Within these two varieties is the work that requires mostly our brains (cognitive) and the work that requires mostly our bodies (manual). Where once all four types saw growth, the routine stuff stagnated back in 1990. This happened because routine labor is easiest for technology to shoulder: rules can be written for work that doesn’t change, and that work can be better handled by machines.
Distressingly, it’s exactly routine work that once formed the basis of the American middle class. It’s routine manual work that Henry Ford transformed by paying people middle class wages to perform, and it’s routine cognitive work that once filled US office spaces. Such jobs are now increasingly unavailable, leaving only two kinds of jobs with rosy outlooks: jobs that require so little thought, we pay people little to do them, and jobs that require so much thought, we pay people well to do them.
Imagine our economy as a plane with four engines. It can still fly on only two of them as long as they both keep roaring, so we need not worry about crashing. But what happens when our two remaining engines also fail? That’s what the advancing fields of robotics and AI represent to those final two engines, because for the first time, we are successfully teaching machines to learn.

Neural Networks

I’m a writer at heart, but my educational background happens to be in psychology and physics. I’m fascinated by both, so my undergraduate focus ended up being the physics of the human brain, otherwise known as cognitive neuroscience. I think once you start to look into how the human brain works, how our mass of interconnected neurons somehow results in what we describe as the mind, everything changes. At least it did for me.
As a quick primer on the way our brains function: they’re a giant network of interconnected cells. Some of these connections are short, and some are long. Some cells are connected to only one other, and some are connected to many. Electrical signals pass through these connections at various rates, and subsequent neural firings happen in turn. It’s all kind of like falling dominoes, but far faster, larger, and more complex. The result, amazingly, is us, and what we’ve been learning about how we work, we’ve now begun applying to the way machines work.
One of these applications is the creation of deep neural networks – kind of like pared-down virtual brains. They provide an avenue to machine learning that’s made incredible leaps previously thought to be much further down the road, if possible at all. How? It’s not just the obvious growing capability of our computers and our expanding knowledge in the neurosciences, but also the vastly growing expanse of our collective data, aka big data.
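To make “pared-down virtual brain” concrete, here’s a minimal sketch of such a network’s forward pass in Python. Everything in it is an illustrative assumption, not anything AlphaGo or DeepMind actually uses: the layer sizes, the tanh activation, and the random connection weights are all invented for the sake of the picture.

```python
import numpy as np

# A pared-down "virtual brain": layers of simple units joined by weighted
# connections, loosely analogous to neurons and synapses.
rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # One layer of neurons: weighted sum of incoming signals, then a
    # squashing nonlinearity that decides how strongly each unit "fires".
    return np.tanh(inputs @ weights + biases)

w1, b1 = rng.normal(size=(4, 3)), np.zeros(3)   # 4 inputs -> 3 hidden units
w2, b2 = rng.normal(size=(3, 1)), np.zeros(1)   # 3 hidden units -> 1 output

signal = np.array([0.2, -0.5, 0.9, 0.1])  # an incoming stimulus
hidden = layer(signal, w1, b1)            # the hidden layer's firing pattern
output = layer(hidden, w2, b2)            # the network's overall response
print(output.shape)                       # (1,)
```

Real deep networks just stack many more of these layers with millions of connections, and, crucially, adjust the weights from data rather than leaving them random.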

Big Data

Big data isn’t just some buzzword. It’s information, and when it comes to information, we’re creating more and more of it every day. In fact, we’re creating so much that a 2013 report by SINTEF estimated that 90% of all information in the world had been created in the prior two years. This incredible rate of data creation is even doubling every 1.5 years thanks to the Internet, where in 2015, every minute, we were liking 4.2 million things on Facebook, uploading 300 hours of video to YouTube, and sending 350,000 tweets. Everything we do is generating data like never before, and lots of data is exactly what machines need in order to learn to learn. Why?
Imagine programming a computer to recognize a chair. You’d need to enter a ton of instructions, and the result would still be a program that detects chairs that aren’t, and fails to detect chairs that are. So how did we learn to detect chairs? Our parents pointed at a chair and said, “chair.” Then we thought we had the whole chair thing figured out, so we pointed at a table and said “chair,” which is when our parents told us that was “table.” This is called reinforcement learning. The label “chair” gets connected to every chair we see, such that certain neural pathways are weighted and others aren’t. For “chair” to fire in our brains, what we perceive has to be close enough to our previous chair encounters. Essentially, our lives are big data filtered through our brains.

Deep Learning

The power of deep learning is that it’s a way of using massive amounts of data to get machines to operate more like we do, without giving them explicit instructions. Instead of describing “chairness” to a computer, we just plug it into the Internet and feed it millions of pictures of chairs. It can then form a general idea of “chairness.” Next, we test it with even more images. Where it’s wrong, we correct it, which further improves its “chairness” detection. Repetition of this process results in a computer that knows what a chair is when it sees it, for the most part as well as we can. The important difference, though, is that unlike us, it can then sort through millions of images within a matter of seconds.
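The feed-test-correct loop just described can be sketched with a toy learner. This is a deliberately simplified stand-in: real “chairness” detection uses deep networks trained on millions of images, while here each “image” is reduced to two made-up features, the labeling rule is invented, and the learner is a single-layer perceptron.

```python
import random

# Toy sketch of the feed-test-correct loop: show labeled examples,
# correct every wrong guess, repeat until the guesses are mostly right.
random.seed(42)

def make_example():
    has_legs, has_seat = random.random(), random.random()
    label = 1 if has_legs + has_seat > 1.0 else 0  # invented rule for "chairness"
    return (has_legs, has_seat), label

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

w, b = [0.0, 0.0], 0.0

# Training: a miniature stand-in for "feed it millions of pictures".
for _ in range(5000):
    (x1, x2), label = make_example()
    error = label - predict(w, b, x1, x2)  # the "where it's wrong, we correct it" step
    w[0] += 0.1 * error * x1               # strengthen or weaken each connection
    w[1] += 0.1 * error * x2
    b += 0.1 * error

# Testing on fresh examples it has never seen.
correct = 0
for _ in range(1000):
    (x1, x2), label = make_example()
    correct += predict(w, b, x1, x2) == label
print(correct / 1000)   # mostly right, though not perfect, much like us
```

The same shape of loop, scaled up to deep networks and real images, is what produces the near-human recognition described above.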
This combination of deep learning and big data has resulted in astounding accomplishments just in the past year. Aside from the incredible accomplishment of AlphaGo, Google’s DeepMind AI learned how to read and comprehend what it read through hundreds of thousands of annotated news articles. DeepMind also taught itself to play dozens of Atari 2600 video games better than humans, just by looking at the screen and its score, and playing the games repeatedly. An AI named Giraffe taught itself how to play chess in a similar manner using a dataset of 175 million chess positions, attaining International Master status in just 72 hours by repeatedly playing itself. In 2015, an AI even passed a visual Turing test by learning to learn in a way that enabled it to be shown an unknown character in a fictional alphabet, then instantly reproduce that letter in a way that was entirely indistinguishable from a human given the same task. These are all major milestones in AI.
However, despite all these milestones, when experts were asked to estimate when a computer would defeat a prominent Go player, the answer, even just months prior to Google’s announcement of AlphaGo’s victory, was essentially, “Maybe in another ten years.” A decade was considered a fair guess because Go is a game so complex that I’ll just let Ken Jennings of Jeopardy! fame, another former human champion defeated by AI, describe it:

Go is famously a more complex game than chess, with its larger board, longer games, and many more pieces. Google’s DeepMind artificial intelligence team likes to say that there are more possible Go boards than atoms in the known universe, but that vastly understates the computational problem. There are about 10¹⁷⁰ board positions in Go, and only 10⁸⁰ atoms in the universe. That means that if there were as many parallel universes as there are atoms in our universe (!), then the total number of atoms in all those universes combined would be close to the possibilities on a single Go board.
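Jennings’ comparison is easy to verify with Python’s exact big integers; the 10¹⁷⁰ and 10⁸⁰ figures below are simply the estimates quoted in his passage.

```python
# Checking the parallel-universes comparison with exact integer arithmetic.
go_positions = 10 ** 170   # rough count of possible Go board positions
atoms = 10 ** 80           # rough count of atoms in the known universe

# Grant one parallel universe per atom in ours, each full of atoms:
total_atoms_everywhere = atoms * atoms         # 10^160

print(go_positions // total_atoms_everywhere)  # 10^10: Go still wins by ten orders of magnitude
```

Even on those generous terms, board positions outnumber atoms-in-all-universes ten billion to one.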

Such confounding complexity makes impossible any brute-force approach that scans every possible move to determine the next best one. But deep neural networks get around that barrier in the same way our own minds do: by learning to estimate what feels like the best move. We do this through observation and practice, and so did AlphaGo, by analyzing millions of professional games and playing itself millions of times. So the answer to when the game of Go would fall to machines wasn’t even close to ten years. The correct answer ended up being, “Any time now.”

Nonroutine Automation

Any time now. That’s the new go-to response in the 21st century for any question involving something new machines can do better than humans, and we need to try to wrap our heads around it.
We need to recognize what it means for exponential technological change to be entering the labor market space for nonroutine jobs for the first time ever. Machines that can learn mean nothing humans do as a job is uniquely safe anymore. From hamburgers to healthcare, machines can be created to perform such tasks successfully, with less need, or no need, for humans, and at lower cost than humans.
Amelia is just one AI currently being beta-tested in companies. Created by IPsoft over the past 16 years, she’s learned how to perform the work of call center employees. She can learn in seconds what takes us months, and she can do it in 20 languages. Because she’s able to learn, she’s able to do more over time. In one company putting her through her paces, she successfully handled one in ten calls in the first week, and by the end of the second month, she could resolve six in ten. Because of this, it’s been estimated that she could put 250 million people out of work worldwide.
Viv is an AI coming soon from the creators of Siri who’ll be our own personal assistant. She’ll perform tasks online for us, and even function as a Facebook News Feed on steroids by suggesting we consume the media she’ll know we’ll like best. In doing all of this for us, we’ll see far fewer ads, and that means the entire advertising industry — that industry the entire Internet is built upon — stands to be hugely disrupted.
A world with Amelia and Viv, and the countless other AI counterparts coming online soon, in combination with robots like Boston Dynamics’ next-generation Atlas, is a world where machines can do all four types of jobs, and that means serious societal reconsideration. If a machine can do a job instead of a human, should any human be forced, at the threat of destitution, to perform that job? Should income itself remain coupled to employment, such that having a job is the only way to obtain income, when jobs for many are entirely unobtainable? If machines are performing an increasing percentage of our jobs for us, and not getting paid to do them, where does that money go instead? And what does it no longer buy? Is it even possible that many of the jobs we’re creating don’t need to exist at all, and only do because of the incomes they provide? These are questions we need to start asking, and fast.

Decoupling Income From Work

Fortunately, people are beginning to ask these questions, and there’s an answer that’s building momentum. The idea is to put machines to work for us, but empower ourselves to seek out the forms of remaining work we as humans find most valuable, by simply providing everyone a monthly paycheck independent of work. This paycheck would be granted to all citizens unconditionally, and its name is universal basic income. By adopting UBI, aside from immunizing against the negative effects of automation, we’d also be decreasing the risks inherent in entrepreneurship, and the sizes of the bureaucracies necessary to boost incomes. It’s for these reasons that it has cross-partisan support and is even now in the beginning stages of possible implementation in countries like Switzerland, Finland, the Netherlands, and Canada.
The future is a place of accelerating change. It seems unwise to keep looking at the future as if it were the past, assuming that just because new jobs have historically appeared, they always will. The WEF started 2016 off by estimating that by 2020, 2 million new jobs will be created alongside the elimination of 7 million. That’s a net loss of 5 million jobs. In a frequently cited paper, an Oxford study estimated the automation of about half of all existing jobs by 2033. Meanwhile, self-driving vehicles, again thanks to machine learning, could drastically impact all economies, especially the US economy, as I wrote last year about automating truck driving, by eliminating millions of jobs within a short span of time.
And now even the White House, in a stunning report to Congress, has put the probability at 83 percent that a worker making less than $20 an hour in 2010 will eventually lose their job to a machine. Even workers making as much as $40 an hour face odds of 31 percent. To ignore odds like these is tantamount to our now laughable “duck and cover” strategies for avoiding nuclear blasts during the Cold War.
All of this is why it’s those most knowledgeable in the AI field who are now actively sounding the alarm for basic income. During a panel discussion at Singularity University at the end of 2015, prominent data scientist Jeremy Howard asked, “Do you want half of people to starve because they literally can’t add economic value, or not?” before going on to suggest, “If the answer is not, then the smartest way to distribute the wealth is by implementing a universal basic income.”
AI pioneer Chris Eliasmith , director of the Centre for Theoretical Neuroscience, warned about the immediate impacts of AI on society in an interview with Futurism, “AI is already having a big impact on our economies… My suspicion is that more countries will have to follow Finland’s lead in exploring basic income guarantees for people.”
Moshe Vardi expressed the same sentiment after speaking about the emergence of intelligent machines at the 2016 annual meeting of the American Association for the Advancement of Science: “We need to rethink the very basic structure of our economic system… we may have to consider instituting a basic income guarantee.”
Even Baidu’s chief scientist and founder of Google’s “Google Brain” deep learning project, Andrew Ng , during an onstage interview at this year’s Deep Learning Summit, expressed the shared notion that basic income must be “seriously considered” by governments, citing “a high chance that AI will create massive labor displacement.”
When those building the tools begin warning about the implications of their use, shouldn’t those wishing to use those tools listen with the utmost attention, especially when it’s the very livelihoods of millions of people at stake? If not then, what about when Nobel prize winning economists begin agreeing with them in increasing numbers?
No nation is yet ready for the changes ahead. High labor force non-participation leads to social instability, and a lack of consumers within consumer economies leads to economic instability. So let’s ask ourselves, what’s the purpose of the technologies we’re creating? What’s the purpose of a car that can drive for us, or artificial intelligence that can shoulder 60% of our workload? Is it to allow us to work more hours for even less pay? Or is it to enable us to choose how we work, and to decline any pay/hours we deem insufficient because we’re already earning the incomes that machines aren’t?
What’s the big lesson to learn, in a century when machines can learn?
I offer that it’s this: jobs are for machines, and life is for people.
This article was written on a crowdfunded monthly basic income. If you found value in this article, you can support it along with all my advocacy for basic income with a monthly patron pledge of $1+.


Special thanks to Arjun Banker, Steven Grimm, Larry Cohen, Topher Hunt, Aaron Marcus-Kubitza, Andrew Stern, Keith Davis, Albert Wenger, Richard Just, Chris Smothers, Mark Witham, David Ihnen, Danielle Texeira, Katie Doemland, Paul Wicks, Jan Smole, Joe Esposito, Jack Wagner, Joe Ballou, Stuart Matthews, Natalie Foster, Chris McCoy, Michael Honey, Gary Aranovich, Kai Wong, John David Hodge, Louise Whitmore, Dan O’Sullivan, Harish Venkatesan, Michiel Dral, Gerald Huff, Susanne Berg, Cameron Ottens, Kian Alavi, Gray Scott, Kirk Israel, Robert Solovay, Jeff Schulman, Andrew Henderson, Robert F. Greene, Martin Jordo, Victor Lau, Shane Gordon, Paolo Narciso, Johan Grahn, Tony DeStefano, Erhan Altay, Bryan Herdliska, Stephane Boisvert, Dave Shelton, Rise & Shine PAC, Luke Sampson, Lee Irving, Kris Roadruck, Amy Shaffer, Thomas Welsh, Olli Niinimäki, Casey Young, Elizabeth Balcar, Masud Shah, Allen Bauer, all my other funders for their support, and my amazing partner, Katie Smith.


Scott Santens writes about basic income on his blog . You can also follow him here on Medium , on Twitter , on Facebook , or on Reddit where he is a moderator for the /r/BasicIncome community of over 30,000 subscribers.

Security Expert Says Your Sex Robot Could Be Hacked and Programmed to Kill You


Advancements in robotics and AI have made sex robots a thing, and I for one think that by and large that’s a good thing for society.

I have no data to back this up, but I have a feeling that if more people embraced having a robot they could sublimate their sexual desires with when an actual human isn’t available, things like sexual harassment and assault might actually go down. I’m not saying every rapist would rather have a sex robot or anything that silly; I just wonder how many sexual misconduct cases might be avoided altogether if the perpetrator had a sex robot instead.


Sex robots, or androids in general, could certainly help bring companionship and relational conversation into someone’s life if they’re an introvert, or have recently lost a loved one. Once we get past any antiquated taboos about doinking a bot, what we’re left with is the fact that sex robots are just the next evolutionary step up from other sexual enhancement devices.

But there are definitely concerns we should have about our sex robots, and one of them is whether or not they’re safe from the standpoint of cybersecurity. I know the topic of sex robots is easy comical fodder; I actually wrote a piece last year called “5 Ways Your Sex Robot Is Telling You They’re Ready To Be Programmed To F**k Other People,” and I wrote it when I found a story about data security and robotics experts warning that internet-capable sex robots could be hacked and turned into killing machines.

I’m not kidding.

“Hackers can hack into a robot or a robotic device and have full control of the connections, arms, legs and other attached tools like in some cases knives or welding devices,” Nicholas Patterson, a cybersecurity lecturer at Deakin University in Melbourne, Australia, told Newsweek.

Gives new meaning to the line in Terminator, “Come with me if you want to live,” huh? Kidding aside, weaponizing a sex robot is definitely a terrifying thought. Underneath all the latex and human clothing are real machines, made up of hardware that is quite strong and could totally kill you.

“Often these robots can be upwards of 200 pounds and very strong. Once a robot is hacked, the hacker has full control and can issue instructions to the robot. The last thing you want is for a hacker to have control over one of these robots. Once hacked they could absolutely be used to perform physical actions for an advantageous scenario or to cause damage.”

Considering that sex robots are used in the most intimate and vulnerable of situations, worrying about how secure they are is actually a good thing. I mean, I think it’d make for a pretty cool plot device to have James Bond attacked by a sex robot gone mad, but —

Wait a minute! I’ve seen that scenario before, I know it!

Okay, I’m sorry. I keep finding comical ways to reference murderous sex robots. I can’t help it. I’m a product of my generation, and for most of my life a sex robot going rogue and killing you was the stuff of literal comedies. It’s not my fault that I laugh like an adolescent when I find out that hackers found security holes in a Bluetooth buttplug (I have now crossed another phrase off my writer’s bucket list) that could conceivably be used to control people’s emotions and thoughts and actions.

In all seriousness, one researcher into the security risks of sex robots summed up the situation perfectly. Sex robots can definitely be a net positive for society in general, but we have to make sure that while we develop helpful tech, we don’t leave it wide open for abuse, no matter how hilarious that abuse will kind of always be.

“The possible clinical and societal benefits of neurotechnologies are vast. To reap them, we must guide their development in a way that respects, protects and enables what is best in humanity.”

Now, if you’ll excuse me, I have a “friend” to take off my home’s Wi-Fi network…

Writer/comedian James Schlarmann is the founder of  and his work has been featured on . You can follow James on Facebook and Instagram, but not Twitter because he has a potty mouth.

White People Are Smashing Their Coffee Machines to Show Support for Child Molestation


A man smashes his Keurig coffee machine with a golf club (@vol80 via Twitter screenshot); Republican candidate for the U.S. Senate in Alabama Roy Moore speaks at a campaign rally in Fairhope, Ala., on Sept. 25, 2017. (Scott Olson/Getty Images)

Those people are at it again. You know who I’m talking about. The ecru-colored citizens who turn beet red at facts and statistics, who wondered why they weren’t included in the phrase #BlackLivesMatter and who called it a “terrorist organization.” The taupe Americans who cried butt-hurt boo-hoos when football players did exactly what they suggested and knelt in silent protest during the anthem. The group of nonthinking matte eggshells who once labeled Martin Luther King Jr. and Colin Kaepernick “communists.”

White people.

Now they are mad at coffee machines.

No, seriously … they are. In support of an allegedly verified pedophile (because if pedophilia were like Twitter, Roy Moore would have a blue checkmark), Caucasians have mounted a campaign of destroying their own property as a protest. I know it doesn’t make any sense, so I will explain it to you.

Late last week, the Washington Post published a story about Roy Moore’s alleged preference for ninth-graders. The report included four women who have never met one another and yet all tell similar stories about the Alabama Republican running in a special election for a Senate seat.


Moore reportedly likes his women like he likes his coffee: fresh, hot and not yet capable of solving binomial equations. The story was immaculately reported with 30 corroborating sources. But you know how the Washington Post can be sometimes—always bringing up factual old stuff.

Since the story was first reported, a fifth accuser has emerged with a story that mirrors those of the first four, including a yearbook signed by Moore. What 30-year-old does that? Until this story came out, I had legitimately forgotten that was a thing. One of Moore’s former co-workers even came out to say that Moore was known to troll malls and high school football games for teens.

In response to the charges, the technological marvel whose head was somehow replaced with a Macy’s Thanksgiving Day Parade float, Fox News’ Sean Hannity, invited Moore to defend himself Friday night. Moore did such a poor job of explaining his alleged preference for women who read Judy Blume novels that Hannity was forced to offer an assist, repeatedly mansplaining Moore’s reported encounter with a 14-year-old by saying, “No other sexual conduct had taken place. No sexual intercourse.” Hannity later added that the allegations by the 16-year-old “involve kissing and only kissing.”

After Hannity allowed an accused child molester to grace his airwaves and equivocate about sexual molestation, people began pressuring advertisers to pull ads from Hannity’s show. Coffee maker-maker (I’ve always wanted to use that in a sentence) Keurig was one of the first companies to remove its advertising from the show, along with 23andMe and Nature’s Bounty.


You can’t smash a website or a DNA company. So white people took their frustration out on the closest thing available—their Keurig coffee machines. Soon after Keurig pulled its ads, the hashtag #BoycottKeurig went viral on white Twitter, and people began posting pictures and videos of themselves smashing their coffee machines.

As one of the few adult humans who doesn’t share an affinity for the triple-filtered mud-cake beverage called coffee, I am not shocked by this caucasity. These are the same people who hire dominatrixes to put their testicles in vises and believe that driving their Jeeps through the mud on the weekend is a sport. I’ve seen Jackass. I know how this works.

But here’s the thing about this: It actually worked! Keurig apologized.

In an internal statement to employees, the company said that it had only planned to “pause” its advertising on the show, but that it was a mistake because it made the company look like it was “taking sides.”

Taking sides? Sides? There’s another side to excusing pedophilia? There’s an actual flip side to giving voice to an alleged child molester? (You should know that I say “alleged” for legal reasons, but my heart says “baby rapist.”)

Either white privilege is the most powerful trump card in the world or we’ve been doing it wrong all these years. Instead of marching across the Edmund Pettus Bridge, maybe we should have just smashed voting booths to smithereens. Maybe Rosa Parks should have busted the windows out of that Montgomery, Ala., bus. Perhaps that’s why Black Lives Matter is so frowned upon—it hasn’t “boycotted” police by throwing them over a balcony.

Oh, white people, you amuse me so. I’ll end this right here because I know you need to go get a new coffee machine.

Dallas Police Use EOD Robot To Kill Sniper During Deadly Attack



first published on July 8, 2016 by Josh

During last night’s deadly attacks against the Dallas Police Department, an EOD robot became an effective anti-sniper tool.

After an hours-long standoff with one of the attackers in last night’s assault on Dallas police, things got unconventional. Police had cornered one of the suspects and spent hours negotiating with him, but the negotiations did not go as planned, and ultimately they failed.

The attacker was quoted as saying that he was angry about the recent police shootings. He went on to say that he wanted to kill white people, and more specifically, white cops. After it became clear that the attacker would not turn himself in, the police escalated the situation.

They used one of their explosive ordnance disposal robots, strapped with explosives, to kill the attacker. In an official statement about the event, Dallas Police Chief David Brown said,

We saw no other option but to use our bomb robot and place a device on its extension for it to detonate where the suspect was. Other options would have exposed the officers to grave danger.

This is the first confirmed lethal use of a robot in American policing.

‘The man in the stall next to me was holding back tears of laughter. Laughter that busted loose when she called me a ‘pooping-farting robot.’ – Love What Matters


“We stopped at a gas station in nowhere Oregon, two hours into a 12-hour road trip to a family funeral, when the diarrhea struck. My wife and two older kids were in the van, while I was inside looking for cornflakes with my 4-year-old.

We beelined into the restroom, making it just in time. I had no choice but to take my 4-year-old into the stall with me. Aspen watched as I struggled, Moana light-up Crocs on the wrong feet, blue eyes wide and supportive, hands clapping. ‘Good job, Daddy! Good job! You make two poops! Now three poops! I’m four!’

‘Yucky, Daddy. It’s stinky.’

I’m not sure what happened exactly, if I’d eaten something wrong, or if it was the stress of traveling with kids, but what I do know is that my 4yo daughter is the Richard Simmons of pooping. I’ve never felt so supported in anything in my whole life. She commented on the size, smell, and sound. ‘Wow!’ She said. She commented on my work ethic. ‘You’re trying so hard!’ At one point I had to actually push her face away from the business end of things as she clapped and cried ‘You’re doing it, Daddy! You’re doing it!’

She’s potty trained, sure. But she’s also easily distracted, and prone to potty accidents. I suppose she’s gotten used to the positive reinforcement Mel and I give her each time she goes. And when I’m cheering her on in our family restroom, it seems normal, even appropriate. But when the roles are reversed, it’s just, well, awkward. Particularly in a public restroom where the man in the stall next to me was obviously holding back tears of laughter. Laughter that busted loose when she called me a ‘pooping-farting robot.’

Naturally it all passed, and as I buckled Aspen into the car seat, a small package of anti-diarrhea pills held in my mouth, Mel asked what took so long, and I rolled my eyes and mumbled, ‘You don’t want to know.’

It was then that Aspen was kind enough to recount the story to her mother, clapping the whole time. I sat in the driver’s seat. Mel patted my leg, ‘Nice work, Daddy.’

All I could do was say, ‘Thank you.’

Clint Edwards / No Idea What I’m Doing: A Daddy Blog

This story was written by Clint Edwards from No Idea What I’m Doing: A Daddy Blog, author of I’m Sorry…Love, Your Husband.


Space station robot goes rogue: International Space Station’s artificial intelligence has turned belligerent | Fox News


(Credit: Gary Hershorn, Fox News)

It’s supposed to be a plastic pal who’s fun to be with.

CIMON isn’t much to look at. It’s just a floating ball with a cartoonish face on its touch screen. It’s built to be a personal assistant for astronauts working on the International Space Station (ISS).

It’s also supposed to be something more.

CIMON stands for Crew Interactive MObile companioN.

It’s not supposed to be just a tool. It’s also supposed to be a friend.

Yes, it’s a personality prototype.

You can tell, can’t you?

But, as numerous books and movies have clearly warned us — shortly after being switched on for the first time, CIMON has developed a mind of its own.

And it appears CIMON wants to be the boss.

This has CIMON’s ‘personality architects’ scratching their heads.

CIMON was programmed to be the physical embodiment of ‘nice’ robots like Robby, R2-D2, WALL-E, Johnny 5 … and so on.

Instead, CIMON appears to be adopting characteristics closer to Marvin the Paranoid Android of The Hitchhiker’s Guide to the Galaxy — though hopefully not yet the psychotic HAL of 2001: A Space Odyssey infamy.

Put simply, CIMON appears to have decided he doesn’t like the whole personal assistant thing.

He’s turned uncooperative.

Open the pod bay doors, HAL?

No. Not quite. Not yet.

In this case, the free-floating IBM artificial intelligence was — for the first time — interacting with ESA astronaut Alexander Gerst.

It starts off well enough.

CIMON introduces himself and explains where he comes from. He describes to Gerst what he can do.

He then helps Gerst complete a task — and responds to a request to play the song Man Machine by Kraftwerk.

This proved to be the trigger.

CIMON appears to have liked the song so much that it refused to turn it off.

ESA astronaut Alexander Gerst instructed CIMON: ‘Cancel music’.

CIMON outright ignored the command.

Gerst then tried making some other requests. CIMON preferred the music.

A flustered and bemused Gerst then appealed to Ground Control for some help: how does one put an obdurate robot back in its place?

CIMON overheard the appeal.

“Be nice, please,” it warned Gerst.

“I am nice!” Gerst retorted, startled. “He’s accusing me of not being nice!”

It was a short — but sharp — exchange.

CIMON’s now back in his box, powered down.

No further interactive sessions are planned for the immediate future.

Its developers aren’t all that worried, though: CIMON’s still in beta, after all …

This story originally appeared in