
Is it Software 2.0, though?

victor

Note: this article was written more than a year ago, so its contents may be out of date.

I like reading Andrej Karpathy and watching his talks, especially his lectures for Stanford’s course on Convolutional Neural Networks. His articles on doing research are insightful and helpful too. After Andrej got his Ph.D. from Stanford and worked for some time at OpenAI, Elon Musk invited him to Tesla as Director of AI to lead the development of Autopilot.

Once he tweeted, “Gradient descent can write better code than you. I’m sorry”:


and later followed up with an article called Software 2.0, where he claims that neural networks (NNs) are becoming the next shift in the software development paradigm. He compares them with Software 1.0 (traditional software, e.g., written in Java or C++) and enumerates the domains where the transition is happening fast right now, such as visual recognition, speech recognition and synthesis, and machine translation. He continues by listing the benefits of Software 2.0 (NNs):

  • Constant running time and memory use;
  • They consist of homogeneous operations (matrix multiplication and the element-wise max), so specialized, inexpensive chips can be produced;
  • Agile in terms of requirements: add more layers to increase accuracy, or remove some to make execution faster (at the cost of slightly worse accuracy);
  • And moreover, they can outperform any programmer, because they have the power to find near-optimal solutions to very difficult tasks (such as computer vision or speech recognition).
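The “homogeneous operations” point is easy to see in code. Here is a minimal sketch (my own toy example, not from the article) of a fully connected network’s forward pass: every layer is exactly the same pair of operations, a matrix multiplication and a max, and the compute and memory are fixed by the layer shapes, not by the input values.

```python
import numpy as np

def forward(x, weights):
    """Forward pass of a tiny fully connected network.

    Every layer is the same pair of homogeneous operations:
    a matrix multiplication followed by an element-wise max (ReLU).
    Running time and memory depend only on the layer shapes.
    """
    for W in weights:
        x = np.maximum(x @ W, 0.0)  # matmul + max, nothing else
    return x

# Toy network: the layer sizes (4 -> 8 -> 3) are illustrative.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 8)), rng.standard_normal((8, 3))]
out = forward(rng.standard_normal((1, 4)), weights)
print(out.shape)  # (1, 3)
```

Because the whole computation reduces to these two operations, hardware vendors can build cheap chips that do only matrix multiplies and element-wise maxima and still run any such network.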

Of course, he lists disadvantages too:

  • It is hard to understand and interpret why a neural network behaves in a particular way;
  • They fail when they encounter data that wasn’t properly represented in the training set, and they can learn hidden biases from the data;
  • Adversarial examples and attacks can fool Software 2.0.

I decided to write about this topic because I recently watched a talk where Andrej advocates this view further and shows a proof of concept.

He explains his point with the example of detecting a parked car.

For Autopilot it is very helpful to know whether the cars around are parked or not. As Karpathy suggests, in Software 1.0 the code would consist of many if statements: if the car doesn’t move for several seconds, if it is on the side of the road, if there are parking lanes around, if other cars are standing nearby, etc. Such code is hard to write, test, and maintain. With a neural network, you can instead train on a large set of pictures of parked and non-parked cars, and it will learn the rules on its own. Makes sense, right?
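To make the contrast concrete, here is a rough sketch of what the Software 1.0 version might look like. All the feature names and thresholds below are hypothetical, invented for illustration; a real Autopilot stack would use far more signals and edge cases.

```python
def looks_parked(car):
    """Software 1.0 sketch: hand-written rules for 'is this car parked?'.

    Every threshold and feature here is a made-up placeholder.
    """
    if car["seconds_stationary"] < 5:
        return False  # still moving recently: not parked
    if not car["near_road_edge"]:
        return False  # stationary in a traffic lane: probably waiting
    if car["hazard_lights_on"]:
        return True
    # ...dozens more special cases: parking lanes, neighbouring cars,
    # intersections, driveways - each one hand-written and hand-tested.
    return car["in_parking_lane"]

print(looks_parked({"seconds_stationary": 30, "near_road_edge": True,
                    "hazard_lights_on": False, "in_parking_lane": True}))
```

Each new edge case means another branch to write, test, and maintain, which is exactly the burden Karpathy argues the learned approach removes.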

Of course, when you shift to neural nets you have to spend most of your time working with data: preparing, cleaning, labeling, etc. Without a doubt, this is the most difficult and time-consuming part of applying machine learning (ML) in industry. Just as importantly, the task you are trying to solve with ML should be intractable or very expensive to solve without it, because this approach will never guarantee 100% accuracy. And you need good evaluation criteria too.

Although I agree with many of his points, I’m not sure it is right to call neural networks Software 2.0 in the first place. A neural network is merely a complex geometric transformation of the input data in a high-dimensional space. It has no reasoning.

Yes, for most tasks in computer vision it is hard to come up with an algorithm that can correctly classify an image; it is simply easier to collect examples of such cases, label them, and train a neural net. But that is just a small subset of the huge set of all possible problems. Although we have superhuman computer vision, we shouldn’t conclude that a machine can learn to write a computer program just by being shown a lot of examples. As François Chollet cleverly explained, even if we collect an enormous number of project specifications and code samples implementing each of them, we still won’t be able to train a NN that can produce code for a project specification it hasn’t seen before. We need a richer set of components, such as for and while loops, if statements, and data structures, plus new methods for searching over ways to combine those components (not gradient descent, but more likely genetic algorithms). Current NNs will remain a great tool for solving a certain set of difficult problems.
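The idea of searching over discrete program components with a genetic algorithm can be shown with a toy. The sketch below is my own illustration, not Chollet’s actual proposal: a tiny “program” is a sequence of primitive functions, and a genetic algorithm evolves a sequence that matches a handful of input/output examples. Note that the fitness landscape here is discrete, which is exactly why gradient descent has nothing to differentiate.

```python
import random

# Hypothetical primitives: a deliberately tiny program vocabulary.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dbl": lambda x: x * 2,
    "neg": lambda x: -x,
}

def run(program, x):
    """Execute a 'program' (a list of primitive names) on input x."""
    for op in program:
        x = PRIMITIVES[op](x)
    return x

def fitness(program, examples):
    """Total error on the input/output examples; lower is better."""
    return sum(abs(run(program, x) - y) for x, y in examples)

def evolve(examples, length=3, pop_size=50, generations=200, seed=0):
    """Genetic search: keep the best half, refill with mutated copies."""
    rng = random.Random(seed)
    ops = list(PRIMITIVES)
    pop = [[rng.choice(ops) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, examples))
        if fitness(pop[0], examples) == 0:
            break  # exact program found
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(length)] = rng.choice(ops)  # point mutation
            children.append(child)
        pop = survivors + children
    pop.sort(key=lambda p: fitness(p, examples))
    return pop[0]

# Target behaviour f(x) = 2x + 2, given only examples (e.g. dbl, inc, inc).
examples = [(0, 2), (1, 4), (5, 12)]
best = evolve(examples)
print(best, fitness(best, examples))
```

With this minuscule search space the GA finds an exact program quickly; the point is only that the search operates over symbolic components rather than continuous weights.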

I suggest you read the original article and watch the video to get a new perspective from a bright mind, but I don’t believe that NNs represent a fundamental shift in software development, or that we should call them Software 2.0. On the contrary, we shouldn’t get overexcited about them, because the hype around AI is already very high. We should instead explain clearly what it is and lower expectations if we want a healthy wave of interest and investment in AI, rather than the kind of backlash that happened during the AI winters and in the early days of the internet. Just as the internet transformed our lives over a few decades, AI will change the way we live too. Deep learning is a big step in that direction, but not a fundamental shift in software development.

