The Question of Balance in AI Advancement

With the Google I/O 2015 keynote having concluded just a few minutes ago, and beyond all the wonderful revelations of the show, I noticed one persistent theme looming like a shadow behind almost every new feature announced. And that was Machine Learning.

I was not surprised to see Sundar Pichai take a moment to explain Deep Neural Networks and how the ones at Google now extend up to 30 layers deep. In fact, much of the rest of his keynote pointed towards NLP and ML. Clearly, the innovations of the coming decades will be driven by AI.

All this advancement in AI is quite fascinating, but at the same time a bit scary. I recently watched the movie ‘Ex Machina’, which deals with a reclusive multimillionaire genius (think Tony Stark, only slightly more realistic) who has built a robot with strong AI at his remote research facility and brings in a young programmer from his company to help determine whether it passes the Turing test. Things become much more complicated soon after, and as you might have guessed, going by the trend of AI/robotics movies (see I, Robot), they don’t end well.

The point I wish to make here is this: how reliant should we ‘optimally’ be on machine learning and AI? Where does the right balance lie between human intelligence and AI?

Unfortunately, my questions become vague after that. I simply feel strongly that there should come a point where AI development forks off into a new branch and stops replacing human brain functionality, because whatever advancement has been achieved by then is considered enough, or as I prefer to say, ‘optimal’.

I completely agree with all the research, and in fact, I’m personally eager to be at the forefront of Machine Learning and AI development some day. I would definitely be thrilled to have the opportunity to interact with a robot that passes the Turing test. I’m only fearful of how reliant humankind may choose to become on external intelligence.

After all, you can never truly trust external intelligence, whether it originates from a human brain, or a machine.

Hello World

Hello world?

I’ve always wondered why this was the first thing people write when trying out something new.

Much like every other question even remotely related to code, someone had already asked this one on Stack Overflow.

Apparently the very first ‘Hello World’ program was written by Brian Kernighan as part of the documentation for the I/O section of the BCPL programming language manual. The same snippet was later used for testing the C compiler, and hence made its way into Kernighan and Dennis Ritchie’s book The C Programming Language, published in 1978. Later still, it was also one of the first programs used to test Bjarne Stroustrup’s C++ compiler.

Ha! Brian Kernighan, Dennis Ritchie, and Bjarne Stroustrup! No wonder people have been using it ever since.
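
For the curious, the program itself is famously tiny. A minimal modern C rendering might look something like this (the original K&R version was even shorter, omitting the include and the return type):

```c
#include <stdio.h>   /* for printf */

int main(void)
{
    /* print the classic greeting, followed by a newline */
    printf("hello, world\n");
    return 0;
}
```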

Anyway!

Hello World!