By Anna Whiteman
Day 2 of MIT’s EmTech Conference was packed with fascinating topics: emergent paradigms in social media, the future of work, brain/machine interfaces, and more. As a follow-up to my coverage of Day 1, here are some highlights and key takeaways from Day 2. Full coverage of both days’ events can be found on the EmTech site for those interested in going deeper, or get in touch with me directly for my event notes.
Emergent Paradigms in Social Media

- The conversation around social media centered mostly on the policies and emergent responsibilities of the largest distributed information platforms, i.e. Google and Facebook, rather than on trends in social media, i.e. Snap and the outer fringes. Fake news, the polarization and radicalization of populations, and the implications for global security were particularly germane to the conversation.
- I was struck by the work that Yasmin Green and her team over at Alphabet’s Jigsaw Project are doing to fight back against state-sponsored actors who are using social media platforms to disseminate false information and harmful propaganda. We’re all too familiar with the prevailing social media narrative around fake news, Russian meddling, and radicalization through the internet. So how are those in control of the platforms that host these information wars working to address the issue? Yasmin and the Jigsaw Project, for one, are heading out to the frontlines of the conflicts, having conversations with potential ISIS recruits to understand what led them down that path, what ultimately turned them away, and where Google might be able to most effectively intervene going forward.
- As it turns out, Google can pretty systematically detect that malicious state-sponsored actors are pushing content through (1) temporal spikes in keyword searches; (2) anomalous clusters of keyword searches in a given geography; and (3) semantic signals. Triaging these indicators, Google can time an intervention by serving countervailing content and attempt to drive potential recruits off a path of radicalization. So many questions boiled up for me after Yasmin’s presentation: how does Google subjectively determine good vs. bad? Should they even be allowed to have this power? How do their models work predictively to identify new channels of radicalization? These questions just scratch the surface; there will be so much more to watch coming out of Capitol Hill and Sand Hill Road in the near term, so pay attention, people!
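Google’s actual models are of course proprietary, but the first two signal types — temporal spikes and regional anomalies in keyword searches — can be illustrated with a simple z-score sketch. Everything below (the counts, region names, and threshold) is made up for illustration:

```python
from statistics import mean, stdev

def spike_score(history, today):
    """Z-score of today's search volume against a trailing window.

    history: list of prior daily counts for one keyword in one region
    today:   today's count for that keyword/region
    """
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0
    return (today - mu) / sigma

def flag_anomalies(series_by_region, threshold=3.0):
    """Return regions whose latest daily count spikes past the threshold."""
    flagged = {}
    for region, counts in series_by_region.items():
        *history, today = counts
        score = spike_score(history, today)
        if score >= threshold:
            flagged[region] = round(score, 2)
    return flagged

# Hypothetical daily search counts for one propaganda-linked keyword.
data = {
    "region_a": [12, 15, 11, 14, 13, 12, 94],  # sudden spike on the last day
    "region_b": [40, 42, 38, 41, 39, 43, 44],  # steady baseline
}
print(flag_anomalies(data))  # only region_a is flagged
```

A real system would presumably combine many such signals (plus the semantic ones) before anything as consequential as an intervention, but the core statistical idea — baseline, deviation, threshold — is this simple.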
- A final interesting point that Yasmin made was about comment forums on news sites: bad actors have infested these forums, and many outlets have simply chosen to shut them down. The space for discussion on the web is actually contracting, even though we know democracy (at least, as we once knew it) thrives in open and transparent systems. Google’s systems are being trained to understand the context around language in order to root out bad actors from these community discussion boards, walking a fine line between censorship and moderation, but a necessary development if we hope to keep these boards open to all as the internet continues to expand. The goal is healthier, more robust discussion, which sufficiently sophisticated natural language processing tools can help achieve.
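The contextual language models Jigsaw trains are far more sophisticated than this, but the basic idea — learning to separate abusive from civil comments from labeled examples rather than fixed keyword lists — can be sketched with a tiny Naive Bayes classifier. The training comments and labels below are toy examples of my own, not Jigsaw data:

```python
from collections import Counter
import math

def train(examples):
    """examples: list of (text, label) pairs; label in {"ok", "abusive"}."""
    word_counts = {"ok": Counter(), "abusive": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Naive Bayes with add-one smoothing; returns the likelier label."""
    vocab = set(word_counts["ok"]) | set(word_counts["abusive"])
    total = sum(label_counts.values())
    best_label, best_logp = None, float("-inf")
    for label in label_counts:
        logp = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            logp += math.log((word_counts[label][word] + 1) / denom)
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

# Toy labeled comments, purely illustrative.
examples = [
    ("thanks for the thoughtful article", "ok"),
    ("great point well argued", "ok"),
    ("you idiot get lost", "abusive"),
    ("shut up troll nobody asked", "abusive"),
]
wc, lc = train(examples)
print(classify("well argued thanks", wc, lc))  # -> "ok"
```

The moderation/censorship tension Yasmin described lives entirely in where you set the decision boundary and what you do with borderline comments; the math itself is neutral.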
Brain/Machine Interfaces

- We saw a guy play a full live-action game on the screen in front of our faces, in real time, using just his thoughts. The technology essentially takes advantage of pre-existing EEG technology to detect the brain waves that convey a particular thought or intention, transposes that brain activity onto a proprietary software platform, and then uses the software to control the game. It’s wild, so cool, and just needs to be seen to be believed. If you watch Black Mirror at all, you have some idea where this future of thought-controlled realities is going…
- We saw a ski hat and a wrap bandage, both coated with tiny LCDs that replicate the processes of traditional (and wildly expensive) MRI machines. These wearables can perform continuous or ad-hoc body and brain scans to detect abnormalities, enabling early detection of cancer, cardiovascular disease, internal bleeding, neurodegenerative diseases, and so forth. To be honest, this one was way beyond my level of comprehension, but I know that it was very rad, and if you’re interested in learning more about the tech, here’s a simpler explanation.
The Future of Work (and beyond)
- Reid Hoffman (LinkedIn, Greylock) was as compelling a speaker as ever on this topic. While Reid didn’t speak much about the future of work as he sees it from his vaunted position, he did touch on some of the initiatives he’s working on to make sure we appropriately adapt modern technologies to industries that may be outside Silicon Valley’s purview but would shamefully fall behind if we neglected them.
- One of these initiatives is a collaboration with Pierre Omidyar and the Knight Foundation called the Ethics and Governance of AI Fund, which effectively democratizes AI and attempts to steer both general and specialized AI in the “right,” ethically conscious direction. The idea is that the engineers who write the code are in possession of a profound power and, left unchecked, could develop subjectively unfair templates that any and all future developments would exacerbate. We should open up the channels through which AI is created today to cultivate diversity across thought, race, religion, age, employment, etc., to ensure that it’s truly an open forum tomorrow. Sounds nice, but let’s see how it goes…
There was also some great conversation that I won’t cover here around robots in our homes, mapping social media signals to eerily accurate political models, cybercrime, and Lamborghinis. Send me an email at firstname.lastname@example.org if you’d like access to my full event notes, and feel free to ping me if you’re working on interesting ideas in any of these sectors. Looking forward to EmTech 2018!