Thoughts from MIT EmTech 2017: Day 1

11.08.17

By Anna Whiteman

I had the incredible opportunity to attend MIT’s annual EmTech (short for Emerging Technologies) conference this week and, in a sense, to take a peek at what the brightest minds in the world are cooking up for our future. The range of topics was remarkably diverse, covering artificial intelligence, deep learning, machine/human interfaces, sustainability, connectivity and social media, business impact, and much more. I couldn’t possibly do justice to the depth and potential impact of the ideas put forth over the last two days in a single post, but some of my main takeaways are sketched out below (here is Day 1; Day 2 will follow in a separate post):

Artificial Intelligence

  • Professor Andrew Ng took us to class on the current state of AI and how we can start to visualize a future under self-taught machines. He framed modern AI as analogous to electricity in its earliest days in its ability to catalyze revolutionary change in pre-existing systems, industries, and hardware. Given that we’re still in such early days, 99% of the economic value from existing AI systems today is drawn from simple A → B input mapping (e.g. input loan application data, output an approval/denial decision; a minimal code sketch of this mapping follows this list). As consumers, we’re most intimately familiar with this input mapping as it pertains to targeted ad-serving (what a waste!). However, as we continue to pour data and inputs into platforms that are now adapted to structuring and understanding this data, we’ll see a step-change away from input/output mapping (“supervised learning”) towards “unsupervised learning” (machines teaching themselves) and ultimately “reinforcement learning” (machines validating their self-taught truths and continuing to enhance learnings with new data). Here’s where we cross over into deep learning machinery and the great beyond.
  • Key takeaway here is that existing data-collecting platforms have an inherent edge in the AI-driven future. Blue River Technology was recently acquired by John Deere for $305 million. Blue River, simply put, drives machines through farmers’ fields and makes real-time decisions about which crops live or die based on their perceived health. A key driver of Blue River’s eye-catching valuation? Its unparalleled data set of images of healthy and unhealthy crops. By cornering this market through data aggregation, Blue River was able to develop elegant systems that make modern farming more efficient, and to do so without the threat of significant competition. The good news for entrepreneurs is that it’s not necessarily a death sentence if you’re starting from scratch with no pre-existing data set to speak of. The key to driving value in that case is embedding a virtuous-circle feedback loop into your business: start cultivating a small data set that lends itself to a specialized product, drive adoption through this specialized solution, collect data from the users you acquire, improve the product with the incremental data, and so forth…
  • There are interesting implications for all the Product Managers out there as well, so if that’s you, shoot me an email and I’ll fill you in!
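
To make Ng’s A → B framing concrete, here is a minimal sketch of supervised learning on the loan-application example, using scikit-learn; the feature names, numbers, and decision data are entirely made up for illustration.

```python
# A -> B input mapping: loan application features in, approve/deny decision out.
# The features and data below are hypothetical, purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [income ($k), debt-to-income ratio, credit score]
X = np.array([
    [85, 0.20, 720],
    [42, 0.55, 610],
    [120, 0.15, 780],
    [30, 0.60, 580],
    [95, 0.30, 700],
    [50, 0.50, 640],
])
y = np.array([1, 0, 1, 0, 1, 0])  # B: 1 = approved, 0 = denied

model = LogisticRegression(max_iter=1000)
model.fit(X, y)  # learn the A -> B mapping from labeled examples

# Map a new input A to an output B
applicant = np.array([[70, 0.35, 690]])
print("approve" if model.predict(applicant)[0] == 1 else "deny")
```

Everything beyond this pattern, the unsupervised and reinforcement flavors Ng described, is largely about relaxing the need for those hand-labeled outputs.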

Deep Learning

  • The sessions on deep learning were a nice complement to the preceding session on AI and underscored how far we have to go before computers start to mimic the human mind. Today, deep learning experts are using cortical models of the human brain to replicate thought processes in computers. However, we’re nowhere near “solving” AI from a scientific or engineering perspective, as the current models are still illogical and narrow-minded, i.e. they lack the common sense and understanding that make us distinctly human, the qualities that make the brain a mind.

  • Kris Hammond gave a nice presentation on his work enabling natural language processing through machine learning, trying to codify the human impulse to link data input → fact → inference → understanding → communication/language. As it turns out, all of the processes our brains work through to produce a spoken thought are incredibly complex and logical, and all of the intermediary steps are critical to capture if we want actionable insights from huge data inputs. One cool example of this tech in action: the city of Chicago collects thousands of data points on its beaches and shorelines every day, measuring air quality, temperature, turbidity, wave height, etc., and dumps it all into an impenetrable Excel spreadsheet. Machines can take this data, crunch it through logical reasoning processes, and produce a digestible report on condition changes over time or observed aberrant behavior (a toy sketch of that pipeline follows this list). Obviously, in a world where we are collecting 2.5 exabytes of data per day (the equivalent of 250,000 Libraries of Congress worth of data every day), this technology could play a huge role in understanding the world around us a bit better.
  • The Innovators Under 35 in this category were particularly impressive. Ian Goodfellow is a staff researcher at Google Brain and invented Generative Adversarial Networks, which effectively apply game theory to the training of deep learning networks, enabling algorithms to learn much more efficiently. Two players (our machines) square off: one (the generator) continually tries to trick the other with fakes built from a small data set, while the other (the discriminator), in the pursuit of not getting tricked again and again, starts to extrapolate and apply common sense to similar-enough data inputs. Each play conditions both sides to learn more effectively as they move toward a Nash equilibrium (a rough sketch of this generator/discriminator loop also follows this list). This type of efficiency is incredibly powerful because people with fewer resources and smaller data sets (i.e., not Google) can still participate in the training of machines for deep learning applications.
  • Yibiao Zhao is the co-founder of isee.ai, which is giving self-driving cars the ability to understand their surroundings using common sense (imagination rather than mere interpolation). Basically, a car can identify an atypical mass in the road ahead; the shape could be either a plastic bag or a rock. Plastic bags are likely to fly away; rocks are not. Current self-driving cars will swerve out of the way of both, even though the plastic bag doesn’t pose a real hazard whereas a swerving car might. Train a car to understand that bags will fly away through this kind of imaginative learning, though, and it will start to mimic the understanding and actions of human drivers.
  • Here is one of the most helpful/illuminating reads on the subject of Neuralink/deep learning that I’ve ever come across.
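
Here is the toy sketch of Hammond’s data → fact → inference → language pipeline promised above, applied to made-up beach sensor readings; the field names, numbers, and thresholds are hypothetical and far simpler than anything a real system working on Chicago’s data would use.

```python
# Toy data -> fact -> inference -> language pipeline. The readings, field
# names, and thresholds below are invented for illustration only.

readings = [
    {"beach": "North Avenue", "day": "Mon", "wave_height_ft": 1.2, "turbidity_ntu": 4.0},
    {"beach": "North Avenue", "day": "Tue", "wave_height_ft": 3.8, "turbidity_ntu": 11.5},
]

def facts(prev, curr):
    """Turn two raw rows (data) into structured facts about what changed."""
    return {
        "beach": curr["beach"],
        "wave_change": curr["wave_height_ft"] - prev["wave_height_ft"],
        "turbidity_change": curr["turbidity_ntu"] - prev["turbidity_ntu"],
    }

def inferences(fact):
    """Apply simple domain rules (inference) to flag aberrant behavior."""
    notes = []
    if fact["wave_change"] > 2.0:
        notes.append("a sharp rise in wave height")
    if fact["turbidity_change"] > 5.0:
        notes.append("unusually cloudy water")
    return notes

def report(fact, notes):
    """Render the insight in plain language (communication)."""
    if not notes:
        return f"Conditions at {fact['beach']} look normal today."
    return f"Heads up: {fact['beach']} is showing {' and '.join(notes)} versus yesterday."

today = facts(readings[0], readings[1])
print(report(today, inferences(today)))
```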
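
And here is the rough sketch of the generator-vs-discriminator game behind Goodfellow’s GANs, fitting a toy one-dimensional Gaussian with PyTorch; the network sizes, learning rates, and target distribution are arbitrary choices of mine, not anything presented at the talk.

```python
# Two players square off: G (generator) tries to fake samples from the real
# distribution, D (discriminator) tries not to be fooled. All hyperparameters
# here are arbitrary illustrative choices.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # sample -> P(real)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.25 + 4.0   # "real" data: samples from N(4, 1.25^2)

    # Discriminator turn: label real samples 1, generated fakes 0
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator turn: try to get its fakes labeled as real
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# After training, generated samples should drift toward the real mean (~4)
print("generated sample mean:", G(torch.randn(1000, 8)).mean().item())
```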

CRISPR

  • This one is definitely big and worth paying close attention to. CRISPR is short for a series of words that you’ll never remember, but Feng Zhang’s description of the means and ends of CRISPR was exceptionally simple and elegant relative to the amount of research effort that has gone into developing CRISPR’s capabilities.
  • Think of CRISPR as a human-genome version of Microsoft Word’s find/replace function. We successfully mapped the full human genome over ten years ago and know that it consists of over three billion base pairs. For this example, think of each base pair as an individual letter in Word. We know that certain sequences of base pairs, or strings of letters, map to known genetic mutations or diseases. So, if we could go into an individual cell, find that specific string of letters, cut it out (while preserving all the sequences around the mutated one), and replace it with a healthy sequence, we could theoretically rid the cell of disease (a quick sketch of the analogy in code follows this list). This is what CRISPR has effectively been able to prove out in practice, replacing mutated DNA sequences with healthy synthetic ones, guided to the right spot by RNA. The technology is currently undergoing clinical trials and has shown promise in its ability to address mutations that lead to sickle-cell disease, pervasive heart conditions, and more. You could even apply CRISPR to agtech, conditioning crops to withstand more threatening weather conditions such as drought or extreme heat. The technology is advancing at breakneck speed and has awesome potential to remediate pain and suffering in affected patients, alleviate strain on the healthcare system more broadly, and create greater food security in a challenging climate environment.
  • The obvious elephant in the room here (as it usually is with tech) is that in the wrong hands, this technology could be highly destructive. Bioethicists will strongly disagree about the morality of gene editing, and I’ll leave it to you to imagine a future in which you can pay your way to preferable genetic coding. Suffice it to say for now that the incredible, well-intentioned team behind CRISPR is focused on advancing the technology itself and staying out of the philosophical debates, and I have faith in their good intentions and exceptional talent.
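
As promised above, here is the find/replace analogy in code form; the sequences are toy strings, not real genes, and actual CRISPR editing works through a guide RNA and a Cas enzyme rather than a literal string substitution.

```python
# CRISPR as "find and replace" on a genome, in the Microsoft Word sense.
# The sequences below are toy examples, not real genes.
genome = "ATCGGATTACAGGCTTAACCGTA"

mutated_sequence = "GATTACA"   # the disease-linked "string of letters" to find
healthy_sequence = "GATCACA"   # the corrected sequence to write back in

if mutated_sequence in genome:
    # Cut out the target and splice in the healthy sequence, leaving
    # everything on either side of it untouched.
    edited_genome = genome.replace(mutated_sequence, healthy_sequence, 1)
    print(edited_genome)  # ATCGGATCACAGGCTTAACCGTA
```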