Michael Cohen and the Creation of a Deep Learning AI Lie Detector
By Rob Enderle
Mar 4, 2019 5:00 AM PT
Like many of you, I was fascinated by the Michael Cohen testimony last week, which was more performance art than fact-finding. It tends to be fascinating to watch disgruntled ex-employees testify, but they often aren't the most reliable witnesses. The personal nature of their termination tends to push them toward exaggeration, and many were fired for legitimate reasons.
However, I'm a tech analyst, and I'm always thinking about how I would make something better. In this case, there were several ways you could define "better" -- more helpful to my own political party, more entertaining (thus holding more viewers), or more likely to drive real change.
I'm about real change, and what would have been helpful to most of us would have been something that told us, with acceptable confidence, two things. The first is obvious: whether he was lying. The second isn't as clear: whether what he said was true.
That may seem like a weird distinction, but I'll explain how deep learning artificial intelligence could perform both tasks with acceptable levels of confidence. I'll close with my product of the week, HoloLens 2 from Microsoft -- an offering that is taking us closer to true magic.
What Is Truth?
We often focus on the wrong thing. In the movie Minority Report, the fictional crime-prediction system relied on people with precognitive abilities to identify a likely crime so the future criminal could be incarcerated before any damage was done.
The focus was on incarceration rather than preventing the crime, and that is why the service failed. Had it instead focused on warning both the victim and the future criminal, the problem with it being less than 100 percent accurate would have been mitigated. The actual goal, preventing the crime, would have been more sustainable.
I put this in the context of Cohen's testimony, because the supposed goal is to get the truth. Yet if you watched the hearing, you saw that Republicans were far more focused on discrediting Cohen (and they didn't defend Trump at all -- seeming to suggest Trump may not be defensible).
You would think that in most cases the truth and lying would be aligned, but often they aren't. Many people make compelling arguments that are wrong, but they aren't lying. Their beliefs are simply out of line with reality.
I'd argue that, generally, knowing if what is being said is true is more important than knowing whether the speaker is lying. A charismatic believer who is also unhinged with respect to the truth can be far more dangerous than a simple liar.
I use Minority Report as an example because one of the powers of a deep learning AI is that it has the potential to be highly predictive -- with increasing accuracy based on timing and the quality and amount of information.
If you have a system that could predict the future with high accuracy, you also have enough information to make the two important determinations I mentioned: whether what is being said is true; and whether the speaker is lying.
In theory, we should care about the truth more, but during the Michael Cohen testimony the Democrats focused on making the anti-Trump testimony more powerful, and the Republicans focused on arguing Cohen was a liar. Neither side really spent that much time validating the testimony, even though Cohen did supply corroborating documents.
This isn't uncommon at all. In a trial, the experts on your side are believed absolutely, while the experts on the other side are believed to be dishonest crooks. The poor judge -- at one time I wanted to be one -- who generally isn't a subject matter expert, must then figure out which expert to believe.
Qualcomm vs. FTC
I attended parts of the Qualcomm vs. FTC trial, and it was clear the FTC's expert was unreliable. He was one of those folks who takes a position, then does the work to validate it, and then uses the defense that he is the smartest person in the room and everyone who disagrees with him is an idiot.
That kind of expert is dangerous. You should start with the evidence and then form your position, not the other way around, or confirmation bias is likely to cause you to reach the wrong conclusion.
The FTC expert had testified for the DoJ in a prior trial involving a different case -- the AT&T-Time Warner merger, I believe -- and the judge tore into him with a passion, basically saying his "theory" was crap.
This questionable expert was the FTC's pivotal witness. A strong argument can be made that the FTC wasted a massive amount of money, as did Qualcomm, presenting and then defending against an invalid theory. Had the FTC known that the expert's theory had been discredited, it may have avoided both a likely loss in court and the unnecessary expenditure of resources to prosecute a nonexistent crime.
Deep Learning AI Fix
Deep learning AIs are new. What makes them so incredibly powerful compared to their earlier machine learning counterparts is that they train themselves at computer speeds. Machine learning required humans to teach the machines, but deep learning systems, for the most part, learn independently. Given the right framework, they will churn through massive amounts of information to become ever more capable of making autonomous decisions.
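To make that contrast concrete, here is a deliberately tiny sketch of the underlying idea -- a model learning its own decision rule from labeled examples rather than being hand-programmed. Everything in it (the features, the data, the labels) is synthetic and of my own invention; it is an illustration of learning from data, not any real courtroom or lie-detection system.

```python
# Minimal illustration: logistic regression trained by stochastic gradient
# descent on a toy, made-up dataset. The point is only that the weights are
# learned from examples, not coded by a human.
import math

def train(samples, labels, lr=0.5, epochs=500):
    """Learn weights for sigmoid(w . x + b) from labeled examples."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # predicted probability
            err = p - y                     # gradient of the log-loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Hypothetical "claims": feature 1 = has corroborating documents,
# feature 2 = contradicted by other testimony. Labels: 1 = true, 0 = false.
data = [[1, 0], [1, 0], [0, 1], [0, 1], [1, 1], [0, 0]]
truth = [1, 1, 0, 0, 1, 0]
w, b = train(data, truth)
preds = [predict(w, b, x) for x in data]
```

A real deep learning system would stack many such learned layers and train on vastly more data, but the principle -- the rule comes out of the data, not out of a programmer -- is the same.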
This means they could look at a case like Qualcomm's, for instance, and determine not only if a crime was committed, but also whether it would be worthwhile to prosecute it.
For instance, let's say someone grabs a small child out of traffic, and the police want to know whether the child is safe at home. There might be a case for child endangerment, but a situation in which the mother dropped some groceries and the child used the distraction to slip into the street would be viewed far differently than one in which the mother had an attention deficit problem that resulted in the child not being supervised adequately.
The AI would look at the pool of available information on both the child and the parent and, within seconds, provide high quality advice on whether the child should be returned to the parent with a light warning or put into some protective service. The main goal would remain pristine as well -- in this case to protect the child, not to punish the parent.
Even if the recommendation were to act against the parent, the deep learning AI could determine, based on what was known about the parent's personality and history, what remedy would fix the problem. It could be removing the child from the parent's care -- or it could be getting the parent help to better focus on the child's well being.
Michael Cohen's Testimony
With respect to Michael Cohen's testimony, neither side optimized the opportunity presented, because of the lack of focus on truth. The Republicans likely are the most at risk, though, because Cohen did have supporting documents, suggesting what he was saying largely was true. (There were some huge holes, particularly regarding his working at the White House, but in general he was well supported.)
So, if the president is impeached, which seems increasingly likely, videos of legislators pounding on Cohen probably will hurt their re-election chances severely. On the other hand, the Democrats should have played off each other more and built a case for impeachment. (It's ironic that the youngest committee member was the only one to seem to get that memo.) Their goal is to impeach, but they still need to build a compelling, simple case.
Now introduce an AI that could report which parts of Cohen's testimony were backed up -- both by the facts he brought and third-party testimony -- and the parts that weren't. The Republicans then would focus on tearing into the unsupported elements of the testimony, and the Democrats could avoid them.
Both efforts would be more likely to succeed (and look good during subsequent election efforts), regardless of what happened to Trump. Underneath, both efforts would be focused more tightly on the truth. The result should be more truthful testimony overall, because it quickly would become clear that false testimony, at best, would be a waste of time -- and at worst, result in criminal charges and jail time.
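As a crude sketch of what "reporting which parts of the testimony were backed up" could look like, consider scoring each statement against a set of corroborating documents. The statements and documents below are invented for illustration, and the word-overlap heuristic is a stand-in for what a trained deep learning model would actually do; this is not a description of any real fact-checking system.

```python
# Hypothetical illustration: rank statements by how well their wording is
# covered by a pool of corroborating documents. A real system would use a
# trained model, not raw word overlap.
def support_score(statement, documents):
    """Fraction of the statement's words that appear in any document."""
    words = set(statement.lower().split())
    if not words:
        return 0.0
    corpus = set()
    for doc in documents:
        corpus.update(doc.lower().split())
    return len(words & corpus) / len(words)

# Made-up corroborating documents and claims
docs = ["check signed for reimbursement of payment",
        "email confirming the meeting date"]
claims = ["the check was signed as reimbursement",
          "he planned the entire scheme alone"]
scores = {claim: support_score(claim, docs) for claim in claims}
```

Here the documented claim scores far higher than the unsupported one, which is exactly the signal each side could have used -- one to attack the weak points, the other to avoid them.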
In short, there would be an increasing realization that lying would have no upside. The U.S. just went from a president (Obama) who clearly had issues with the truth to one who probably can't spell the word. I don't think that is a good trend at all, and I don't think this will end well for Trump or the country -- but that ending is still avoidable.
I think we all would appreciate a little more truth from our leaders. More importantly, we at least want them to know what the truth is. Otherwise, the decisions they make will drift toward the catastrophically bad far too often. They, and we, need a reliably accurate detector of the truth. We also need to know which of our leaders simply are unable to see the truth, regardless of who presents it.
Wrapping Up: Deep Learning Lie Detector
I think we are on the cusp of creating a deep learning lie detector -- a tool that, in real time and with increasing accuracy, could tell us not only whether the person speaking is lying but also whether that person is conveying real facts vs. unsupported beliefs or delusions. This last part is important, because we have climate change deniers and vaccine deniers who are on a path to making humans extinct. Some of these folks are in positions of power, or will be.
With this technology we could make fake news obsolete, eventually. That alone would be a good thing.
My Product of the Week: Microsoft HoloLens 2
I was a big fan of the original HoloLens. Developed with the Lawrence Livermore Laboratory, it moved from a huge science experiment to a nicely designed offering that looked like something Porsche's design group would create. It was sleek, self-contained and impressive for something that arguably wasn't out of beta test yet.
Microsoft HoloLens 2 pulled back from the design-forward concept into something far more consistent with a commercial product. Improvements are targeted largely at removing the complaints from the first generation.
It has twice the viewable area, and it is better balanced, putting less strain on your neck. You can raise the visor rather than having to take it off. It is easier to fit; it authenticates the user with biometrics; it does eye tracking better; it generally will be less expensive (US$3,500); and it is surrounded by a far richer set of tools, helping firms create content and put the device into service.
One huge change is the ability to use your hands as hands, simply grasping virtual objects to interact with them. (I'm guessing haptic gloves will be a future accessory.) Put simply, it is out of beta, ready to deploy -- and viable.
At some future point, we'll be able to change dynamically how we see the world around us, and we'll likely look back at HoloLens as part of the critical path we took to getting there. To me, HoloLens -- and the technology it represents -- is the closest thing to a path to real magic. Therefore, HoloLens 2 is my product of the week.
The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.
Rob Enderle has been an ECT News Network columnist since 2003. His areas of interest include AI, autonomous driving, drones, personal technology, emerging technology, regulation, litigation, M&E, and technology in politics. He has an MBA in human resources, marketing and computer science. He is also a certified management accountant. Enderle currently is president and principal analyst of the Enderle Group, a consultancy that serves the technology industry. He formerly served as a senior research fellow at Giga Information Group and Forrester.