Thursday 12 November 2015

Gattaca of the Mind

Hi guys, welcome to Orygyn!

In this post I'm responding to one of my favourite YouTubers, Jim, or as you might know him, noelplum99. He released a video on Monday called "Genetically Enhanced Intelligence - Major Concerns".

Near the end of the video, he raises an issue that could arise from trying to ban the genetic modification of people, pre-birth, to make them more intelligent. It goes as follows:

If one country bans the technology, there will be another that won't. That country will advance the intelligence of its population, gaining an enormous intellectual (and, by extension, creative and economic) advantage over its competitors. The rest of the world will have no choice but to adopt the technology to keep up. Jim asks us what we think of this.

I responded in short form on his video. The comment is as follows:

"I love making videos about the future. I love it even more when my favourite YTers do :)

As to the economic superintelligent nations question, I'd probably need to explore it in a blog post or video, but my initial reaction would be that the genetic advances being made here wouldn't happen in a vacuum. They would occur side-by-side along with advances in computing and manufacturing. There are any number of ways it could go, but we could reduce the marginal cost of food production to near-zero, much like we've already done with most human knowledge, colonise (sic) other planets, upload our minds to a cloud, eliminate our need for food and water, or kill ourselves before any of this happens, or any logical combination of those things, all of which lessen or eliminate the threat or concerns posed by a superintelligent nation. The author, Martin Ford, argued for a basic income to solve the problem you identified in your last video about AI: the problem of what to do for money when all the jobs are automated, and this too could affect the fear factor of a "nootropic China"."

I have made some points already in this comment, but with this post, I'd like to contest the premise. I don't think we will face this issue. Not because I think we'll be destroyed before we get there, although that is a possibility. Not because I think "we will never be able to tamper with something as complex and powerful and SACRED as the human brain, nor should we". It is in fact because we will do exactly that, but by a different method, at a different pace, and for a (partially) different reason.

What Jim is getting at is essentially the plot of the film "Gattaca" (without question my all-time favourite film), if that plot focused only on the brain. It's possible I'm misinterpreting, but from the way Jim explains it, it sounds like he envisions intelligence increasing slowly. He says that we could only afford to be "one generation behind" the country that makes the first move. This is, of course, about genetic modification, and it is envisioned to be done to avoid falling behind.

GM brains. One generation. Don't fall behind. Circumstances, pace, and reason respectively. I would like to contest all three. Here are mine:

Avatars. Doubling (potentially) each year and getting faster. To understand that which baseline humans could never understand, and to not fall behind AI.

Genetic modification is a very slow way of increasing intelligence: you modify the genome pre-birth and then wait a generation before the next round of improvements. Dmitry Itskov of the 2045 Initiative has an alternative in the works. His Avatar program is intended to give humans a more durable, more energy-efficient, and all-round more capable body. We could use fewer resources, live much longer, phase out the need for transport, simplify how we obtain energy, and scale up our intelligence by orders of magnitude with each software upgrade, instead of slowly across generations. And if there's something about being human you just can't live without, adding VR elements would make those things possible, and better if you like, while still inhabiting the avatar. That takes care of pace as well.
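To make that pace difference concrete, here is a toy sketch. The numbers are my own illustrative assumptions, not figures from Jim's video or the 2045 Initiative: I take a "generation" to be 25 years and assume each step (a yearly upgrade on one path, a round of germline modification on the other) doubles some abstract measure of capability.

```python
# Toy comparison of the two growth rates discussed above.
# All numbers are illustrative assumptions: a "generation" is taken as
# 25 years, and each step is assumed to double capability.

YEARS = 50
GENERATION_LENGTH = 25  # assumed years per GM generation


def doublings(years: int, period: int) -> int:
    """Capability multiplier after `years`, doubling once per `period` years."""
    return 2 ** (years // period)


for year in range(0, YEARS + 1, 10):
    yearly = doublings(year, 1)                          # upgrade path: doubles every year
    generational = doublings(year, GENERATION_LENGTH)    # GM path: doubles every generation
    print(f"year {year:>2}: yearly-upgrade x{yearly:,}  vs  generational x{generational}")
```

After 50 years the yearly-upgrade curve sits at roughly 2^50 times baseline, while the generational curve has only doubled twice. The specific numbers don't matter; the gap between the two curves is the point.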

Whether it will pan out on that time scale is the big question. Ray Kurzweil thinks it will, obviously. Michio Kaku has doubts (around the 40 min mark). Of course, no-one really knows. I take Ray's view simply because it's possible, it's optimistic, and I'm also obsessed with understanding things.

What about the reason why? Well, think of what AI can do today. Watson beat Ken Jennings at Jeopardy four years ago. I won't bombard you with more links here as they're all available on YouTube, but we have the Google cars and Tesla's competing Autopilot system, a facial recognition algorithm that outperforms humans, an algorithm that can describe a scene, ASIMO, BigDog, Siri, Google Now and Cortana. This is what we have now, and we had nothing close ten years ago. This is the beauty of Moore's law and the law of accelerating returns. We do have to assume the trend continues, and there are major stumbling blocks up ahead, but, as ever, I am optimistic. When I enter middle age, I will be greeted by my robot equals and then superseded by them. In order to compete, I, and everyone else, will need to upgrade, and at that point we'd better hope Dmitry Itskov didn't drag his feet. If he didn't, and we vastly increase our intelligence, imagine what we could achieve. Imagine what we could understand that we couldn't hope to understand now.

Maybe I'm getting carried away with myself and it's all too good to be true. Maybe you're just a pessimist who watches too much news about ISIS and school massacres and pines for how things used to be (colonial, at war, poorer, sicker, more racist, sexist and homophobic, oblivious to the existence of transgender people, with no welfare state, unquestioning respect for authority, literally god-fearing, maybe a bit more social but that's it really). For the foreseeable future, time will only flow in one direction: forward. I choose to embrace it.

8<{D-
