We’re Building Computers Wrong (for artificial intelligence)

Visit https://brilliant.org/Veritasium/ to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription. Digital computers have served us well for decades, but the rise of artificial intelligence demands a totally new kind of computer: analog.

Thanks to Mike Henry and everyone at Mythic for the analog computing tour! https://www.mythic-ai.com/
Thanks to Dr. Bernd Ulmann, who created The Analog Thing and taught us how to use it. https://the-analog-thing.org
Moore’s Law was filmed at the Computer History Museum in Mountain View, CA.
Welch Labs’ ALVINN video: https://www.youtube.com/watch?v=H0igiP6Hg1k

Crevier, D. (1993). AI: The Tumultuous History Of The Search For Artificial Intelligence. Basic Books. – https://ve42.co/Crevier1993
Valiant, L. (2013). Probably Approximately Correct. HarperCollins. – https://ve42.co/Valiant2013
Rosenblatt, F. (1958). The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain. Psychological Review, 65(6), 386-408. – https://ve42.co/Rosenblatt1958
NEW NAVY DEVICE LEARNS BY DOING; Psychologist Shows Embryo of Computer Designed to Read and Grow Wiser (1958). The New York Times, p. 25. – https://ve42.co/NYT1958
Mason, H., Stewart, D., and Gill, B. (1958). Rival. The New Yorker, p. 45. – https://ve42.co/Mason1958
Alvinn driving NavLab footage – https://ve42.co/NavLab
Pomerleau, D. (1989). ALVINN: An Autonomous Land Vehicle In a Neural Network. NeurIPS, (2)1, 305-313. – https://ve42.co/Pomerleau1989
ImageNet website – https://ve42.co/ImageNet
Russakovsky, O., Deng, J. et al. (2015). ImageNet Large Scale Visual Recognition Challenge. – https://ve42.co/ImageNetChallenge
AlexNet Paper: Krizhevsky, A., Sutskever, I., Hinton, G. (2012). ImageNet Classification with Deep Convolutional Neural Networks. NeurIPS, (25)1, 1097-1105. – https://ve42.co/AlexNet
Karpathy, A. (2014). Blog post: What I learned from competing against a ConvNet on ImageNet. – https://ve42.co/Karpathy2014
Fick, D. (2018). Blog post: Mythic @ Hot Chips 2018. – https://ve42.co/MythicBlog
Jin, Y. & Lee, B. (2019). 2.2 Basic operations of flash memory. Advances in Computers, 114, 1-69. – https://ve42.co/Jin2019
Demler, M. (2018). Mythic Multiplies in a Flash. The Microprocessor Report. – https://ve42.co/Demler2018
Aspinity (2021). Blog post: 5 Myths About AnalogML. – https://ve42.co/Aspinity
Wright, L. et al. (2022). Deep physical neural networks trained with backpropagation. Nature, 601, 549–555. – https://ve42.co/Wright2022
Waldrop, M. M. (2016). The chips are down for Moore’s law. Nature, 530, 144–147. – https://ve42.co/Waldrop2016

Special thanks to Patreon supporters: Kelly Snook, TTST, Ross McCawley, Balkrishna Heroor, 65square.com, Chris LaClair, Avi Yashchin, John H. Austin, Jr., OnlineBookClub.org, Dmitry Kuzmichev, Matthew Gonzalez, Eric Sexton, john kiehl, Anton Ragin, Benedikt Heinen, Diffbot, Micah Mangione, MJP, Gnare, Dave Kircher, Burt Humburg, Blake Byers, Dumky, Evgeny Skvortsov, Meekay, Bill Linder, Paul Peijzel, Josh Hibschman, Mac Malkawi, Michael Schneider, jim buckmaster, Juan Benet, Ruslan Khroma, Robert Blum, Richard Sundvall, Lee Redden, Vincent, Stephen Wilcox, Marinus Kuivenhoven, Clayton Greenwell, Michael Krugman, Cy ‘kkm’ K’Nelson, Sam Lutfi, Ron Neal

Written by Derek Muller, Stephen Welch, and Emily Zhang
Filmed by Derek Muller, Petr Lebedev, and Emily Zhang
Animation by Iván Tello, Mike Radjabov, and Stephen Welch
Edited by Derek Muller
Additional video/photos supplied by Getty Images and Pond5
Music from Epidemic Sound
Produced by Derek Muller, Petr Lebedev, and Emily Zhang

30 Responses

  1. 5MadMovieMakers says:

    Hyped for the future of computing. Analog and digital could work together to make some cool stuff

    • Sspectre says:

      @Teru AI is an interesting dilemma to say the least. Something that we can be sure of is that every machine and program has some biases built in that come from the programmer.

      In the case of AI, the biases come from the data it learns from, how its neurons are wired together, and the algorithms it uses to learn and adjust those biases. Things like self-preservation, threat assessment, and general emotions don’t happen spontaneously in AI; they must be taught.

      Something else that needs to be figured out before true AI is a thing is brain plasticity: being able to make or cut connections between neurons. Without it the AI is “stiff”; it can be very good at the skills it was built for, but it would struggle a lot to learn new things.

      At that point we need to ask, would we even need this brain plasticity for an AI? Would it be practical? If it’s not, then we don’t have much to worry about, but if it is the case then it needs to be handled with care.

      We could still stop it; it is still bound by its physical form. It can’t be copied, uploaded, or downloaded, since an instance of such an AI is just not repeatable. It wouldn’t be an immortal digital ghost, but something rather similar to us, almost human.

    • JiveDadson says:

      In the ’80s and ’90s I wrote software for a maker of chip-testing hardware. We could and did test so-called mixed-signal devices. The future has been here for a long time.

    • Raymund Hofmann says:

      @M. Riggs What most don’t understand is that in the end there is no digital or analog; it is all physics and the technology built on it.
      Even “digital” is imprecise, non-deterministic, and random; it can just, seemingly easily, be made to show those flaws only with incredibly low probability, so they are hard to see.
      Seemingly easily, because it is paid for in the area, power, and speed of the technology.

    • M. Riggs says:

      They do. But the analog part ends at the design of the components.
      We can’t get rid of analog at the development level; it’s a big part of what allows us to push speeds up.

    • Raymund Hofmann says:

      @LangweiligxD135 Like intelligence and realization were the end of stupidity and superstition throughout human history?

  2. Aetre19 says:

    My dad’s “Back when I was your age” stories on computing were about how he had to learn on an analog computer, which, according to him, you “had to get up and running, whirring at just the right sound (you had to listen for it) before it would give you a correct calculation. Otherwise, you’d input 100+100 and get, say, 202 for an answer.” He hasn’t been able to remember what make or model that computer was, but I’m curious. Any old-school computer geeks out there know what he may have been talking about? The era would have been the late ’60s or early ’70s.

  3. ElectroBOOM says:

    Awesome information!

  4. Robert B says:

    As a guy who helps manufacture flash memory I find this really intriguing: especially because flash memory is continuing to scale via 3-D layering, so there’s a lot of potential, especially if you can build that hardware for multiplication into the chip architecture.
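The in-flash multiply-accumulate the video describes works by storing each weight as a cell conductance and letting Ohm’s and Kirchhoff’s laws sum the products on each output line. Here is a minimal Python sketch of that idea; the function name, the 8-bit quantization, and the symmetric weight range are my own illustrative assumptions, not Mythic’s actual design:

```python
def analog_matvec(weights, inputs, levels=256):
    """Simulate an analog in-flash matrix-vector multiply.

    Each weight is stored as a flash-cell conductance quantized to
    `levels` discrete values (8-bit here). Inputs are applied as
    voltages, and Kirchhoff's current law sums the products on each
    output line: I_i = sum_j G_ij * V_j.
    """
    w_max = max(abs(w) for row in weights for w in row) or 1.0
    step = 2 * w_max / (levels - 1)
    # Quantize every weight to the nearest representable conductance.
    g = [[round(w / step) * step for w in row] for row in weights]
    return [sum(gij * vj for gij, vj in zip(row, inputs)) for row in g]
```

The quantization step models the limited precision of a flash cell; real devices also add noise and drift, which this sketch ignores.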

    • Raymund Hofmann says:

      So we should have hope in self driving Uber not killing cyclists at night anymore?

    • Seldom Popup says:

      Flash cells are micron scale, while the AI accelerators doing integer operations are built with the latest 4 nm technology. And floating gates have a really limited lifetime compared to pure logic circuits.

    • Robert B says:

      @ravener Yeah, but interconnects can be designed around with clever architecture to an extent. It’s still quite interesting.

    • Martiddy - Sama says:

      @Zeus Kabob It depends on what kind of image processing the neural network is doing. If the computer wants to identify a face, maybe it doesn’t need to process every pixel once it has processed the pixels near the face. But in some cases distant pixels can indeed be correlated, like images from a camera in an autonomous car identifying the white lines of a street: it could be 99% sure a line is straight, while the corner pixels clearly indicate it is curved.

    • Zeus Kabob says:

      @ravener With many ML algorithms you can split problems into multiple sub-problems for different networks to handle. I wonder if developing that area of ML would help make effective analog systems? For example, in image processing, a pixel at the top left of the image has little interaction with a pixel at the bottom right compared to nearby pixels. If you wait to compare them until multiple layers later, it speeds up processing the image and lets algorithms become more adept at finding sub-patterns in the image.
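The locality idea in this thread is exactly what a convolutional layer encodes: each output value depends only on a small neighborhood of the input, so distant pixels only interact after several layers. A minimal plain-Python sketch (the function name is my own; deep-learning “convolutions” are, as here, technically cross-correlations):

```python
def conv2d(image, kernel):
    """Minimal 'valid' 2D convolution: each output pixel depends only
    on the k x k neighborhood under the kernel, so far-apart pixels
    never interact within a single layer."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]
```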

  5. funktorial says:

    started watching this channel when I started high school and now that I’m about to get a phd in mathematical logic, I’ve grown an even deeper appreciation for the way this channel covers advanced topics. not dumbed down, just clear and accessible. great stuff! (and this totally nerd-sniped me because i’ve been browsing a few papers on theory of analog computation)

  6. Handsome von Derpinson says:

    This actually helped me a lot to understand how neural networks work in general. For me it was kind of like black magic before. It still is to an extent, but knowing that modern neural networks are essentially more complex, multi-layered perceptrons helped a lot.
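For anyone else who found the perceptron demystifying: a single one fits in a few lines. This is a sketch of Rosenblatt’s learning rule on one neuron; the function names and the OR task are illustrative choices, not from the video:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Rosenblatt's perceptron rule for one neuron with two inputs:
    weighted sum + hard threshold, weights nudged toward each mistake.
    Modern networks stack many such units and replace the hard
    threshold with a smooth activation so gradients can flow."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # 0 if correct, +/-1 if wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

Trained on the (linearly separable) OR function it converges in a few epochs; on XOR it never will, which is exactly the limitation that multi-layer networks fix.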

    • RedRocket4000 says:

      Note that human neurons are doing a lot more than shown here, and then there is our chemical signaling system on top, which is probably based on the same methods as the slime mold (I think I recall that right) that can do things like best-route calculations, since all animals descend from that lower part of the evolutionary tree. I have seen this presented as a reason to rethink the idea of intelligence, and since we descend from that group, we probably retain a lot of the slime mold’s functions.

      Our neurons work with the chemical system to do the things humans do.

      If we can actually figure out how the brain works, some good genetic engineering might be able to vastly improve its function. After all, the current human brain is like the evolution of computers: it started with the most primitive functions and had higher levels tacked on instead of redesigning the whole thing, so, in a very simplified description, top of spinal cord, then lower brain, then upper brain.

      Note that down in the more primitive part of the brain a very accurate clock system is running, but our higher brain cannot access it; if we could, we would never need to look at the time.

    • Al Addin says:

      @Max Lennon That is the big question that has been debated among philosophers, biologists, physicists, AI experts, and other scientists for centuries. I wish I knew the secrets.
      Materialists like Sir Roger Penrose speculate that consciousness is based on some quantum-gravity effects in microtubules, but that has been challenged by Tegmark’s calculations. Penrose admitted that consciousness is not a computation.
      Why can’t consciousness derive from materials, atoms, or switches? Imagine you managed to build a robot that surpassed a human being, with all its senses, by several orders of magnitude. This is what we are actually pursuing today with ever more sophisticated devices like digital and analog image and audio processing sensor chips. But do those networks contain a conscious mind, a watcher, a hearer, an entity? No. You need a consciousness that actually perceives those signals as qualia. At this point many people come up with science fiction or mere speculation. But if it were that simple, the mystery wouldn’t still exist.

    • Al Addin says:

      @Existenceisillusion What you and most people are referring to when hinting at nature is actually the ‘soft problem of consciousness’. We have made big progress in that field of materials, sensors, organs, and their interactions; today we know much better how, for example, an eye or an ear works. But what I am talking about is called ‘the hard problem of consciousness’: how can a material-based body sensor like an eye transmit the processed environment, the images, to that entity we know as consciousness? That entity lives through and witnesses the impressions sent by our senses as qualia. I wish I could explain it in easier words.
      A quite good introduction to this for a broader audience has been given by David Chalmers in his TED talk.

    • Max Lennon says:

      @Al Addin If consciousness can’t derive from atoms, what’s your theory as to what the human brain is made of?

    • Will says:

      That’s the thing: some techniques used in AI are not fully understood; we just know that they work. This field isn’t like mathematics or physics; I’d describe it as a lot more fluid.

  7. Mike O says:

    This is a fantastic video! I’m an amateur in computing at best, but I love to learn about them. This is the best explanation of neural nets I’ve ever seen, and I’m sure my first Google search about them was 4+ years ago. The amount of work that goes into your videos is apparent, and it just makes me want more of them!

  8. Jeff13mer says:

    Just took EM-Physics and now hearing about how this relates to linear algebra… this is so genius I love it. This is so cool, thank you for making this video.

  9. eXacognition AI says:

    Thanks for highlighting the relationship between AI and analog systems to the world. The connection to Cognitive AI is even stronger, as we have found in our neuromorphic and ERRIS processor work. This is because processing human-level context of the kind needed for superintelligence is far more resource-intensive than simple classification or NLP. There is a direct gain from designing advanced Cognitive AI with analog systems that digital systems can’t match, because there are not enough resources on Earth to do so. Analog solves this paradox.

  10. Frossida says:

    Incredible video as always. I wish the teaching media used in engineering faculties were as brilliantly prepared as Veritasium videos.

    Soulless, static slides on these topics make me feel like I don’t enjoy computer science/engineering as much as I thought I did. When I see one new Veritasium video, my enthusiasm comes back. So thanks a lot.
