Season 3 Episode 1 Jan 13, 2021

Trump De-platformed, Parler Dropped, an ANTIFA Conspiracy, and Virtual CES

Pitch

Turns out, social media networks are private companies and *not* public goods.

Description

In this episode, we talk about the mass indefinite ban of Trump on social media platforms, and AWS and the Google and Apple app stores dropping Parler in the aftermath of the siege of the US Capitol by Trump supporters. Then we chat with Dave Gershgorn, senior writer at OneZero at Medium, who covers AI and its effects on society, about the conspiracy theory that antifa were embedded in the January 6th attack on the US Capitol building. Finally, we speak with Monica Chin, writer at The Verge, about this year's virtual Consumer Electronics Show.

Hosts

Saron Yitbarek

Disco - Founder

Saron Yitbarek is the founder of Disco, host of the CodeNewbie podcast, and co-host of the base.cs podcast.

Josh Puetz

Forem - Principal Engineer

Josh Puetz is Principal Software Engineer at Forem.

Guests

Dave Gershgorn

Dave Gershgorn is a senior writer for OneZero, a technology publication from Medium. He covers artificial intelligence and surveillance.

Monica Chin

Monica Chin is a writer for The Verge covering computers. Previously, she has covered consumer tech for Mashable, Tom's Guide, and Business Insider.

Show Notes

Audio file size

84,763,032 bytes (approximately 80.8 MB)

Duration

00:58:52

Transcript

[00:00:10] SY: Welcome to DevNews, the news show for developers by developers, where we cover the latest in the world of tech. I’m Saron Yitbarek, Founder of Disco.

 

[00:00:19] JP: And I’m Josh Puetz, Principal Engineer at Forem.

 

[00:00:21] SY: This week, we’re talking about the mass indefinite ban of Trump on social media platforms and AWS and the Google and Apple app stores dropping Parler in the aftermath of the US Capitol siege by Trump supporters.

 

[00:00:34] JP: Then we chat with Dave Gershgorn, Senior Writer at OneZero at Medium, who covers AI and its effects on society about the conspiracy theory that Antifa were embedded in the January 6th attack on the US Capitol building.

 

[00:00:46] DG: I think that above all being extremely realistic and upfront in terms of the limitation of the technology is crucially important.

 

[00:00:55] SY: Then we’ll speak with Monica Chin, Writer at The Verge, about this year’s virtual Consumer Electronics Show.

 

[00:01:01] MC: That’s the one thing we’ve kind of missed out on was we didn’t really get to get first impressions.

 

[00:01:07] SY: On January 6th, a violent mob of Trump supporters broke into the US Capitol Building to stop the formalization of President-Elect Joe Biden’s electoral victory. The riot caused the deaths of five people, and the FBI is currently arresting those they can identify from footage of the assault on the building. Now you’re probably tired of hearing about it already, probably tired of talking about it. I know we are, but we couldn’t in good conscience not talk about it on the show because of the events that followed and the enormous impact they have on the tech world. Now after this act of domestic terrorism, which was incited by countless baseless statements by Trump that the election was stolen and that he had won it by a landslide, as well as him saying things such as, “If you don’t fight like hell, you’re not going to have a country anymore,” social media companies finally had enough of his rhetoric. In order to not incite further violence, Twitter at first put a 12-hour suspension on Trump’s account, which then, after Facebook banned him indefinitely for similar reasons, also turned into an indefinite ban from Twitter. After this decision, Twitter’s stock fell by 7%. Before the ban, Trump had 88 million followers and had posted more than 56,000 tweets. Then a slew of other platforms, including Reddit, Snapchat, and Pinterest, banned him indefinitely, along with Trump-related content and tons of white supremacist accounts that might incite violence. Even Stripe announced that it will no longer process payments for the Trump campaign. We’ll have a larger list of companies in our show notes. Some are calling Trump’s social media ban an attack on First Amendment rights, which is kind of absurd because these are private companies and they can block whoever they want for violating their terms of service. But really what I want to get into is why ban Trump now, especially Facebook and Twitter? The two tweets that Twitter banned Trump for are as follows. “The 75 million great American Patriots who voted for me, America First and Make America Great Again, will have a giant voice long into the future. They will not be disrespected or treated unfairly in any way, shape or form!!!” Followed by three exclamation points. And this one, “To all of those who have asked, I will not be going to the inauguration on January 20th,” which some might argue, and I would agree, are far from some of his worst and most egregious tweets. So Josh, what do you make of all this? What do you make of all the social media platforms’ responses to Trump and the recent attack on the Capitol?

 

[00:03:37] JP: Oh!

 

[00:03:38] SY: I think that pretty much sums it up.

 

[00:03:39] JP: I mean, remember when we were talking about JavaScript and is Ruby dead?

 

[00:03:46] SY: Yeah. Oh, those good times.

 

[00:03:48] JP: This has been a long time coming, hasn’t it?

 

[00:03:49] SY: Yeah.

 

[00:03:49] JP: I mean, I wish I was as shocked as everybody else was by the attacks on the Capitol. It was both surreal and kind of old hat at this point to be watching history play out in real time in a browser window on my desktop. As far as Trump finally being banned, I think the biggest feeling I have is finally, finally, what took so long? For me, reading the explanations from Facebook and reading the explanations from Twitter as to why they decided to ban him now, the reasoning is that in the past, he has made comments, he has made allusions to violence, but they’ve always come before violence happened.

 

[00:04:35] SY: Yes, that is the key. That is the key right there. Yes.

 

[00:04:37] JP: Yes. In this case, he took actions outside, in the real world, namely holding a rally and telling people to go walk up to the Capitol. Violence then occurred, and then he posted tweets in support of that violence.

 

[00:04:51] SY: Yes. Yes. That is a huge, huge differentiator between what’s happening and really the world we’re in now versus where we were before January 6th.

 

[00:05:00] JP: I definitely think there’s also a difference between the scale of the violence he has advocated for in the past or the comments he has made in the past. This time, you can directly tie them to an attack on the US Capitol. That’s a huge deal.

 

[00:05:14] SY: I mean, now I’m just thinking, like, I’m very worried about January 20th. I don’t know what’s going to happen, but I’m very, very concerned about the next two weeks. And I feel like everything’s on the table now.

 

[00:05:24] JP: January 20th being the inauguration of Biden.

 

[00:05:26] SY: The inauguration. Yes.

 

[00:05:28] JP: Right. A cynical take I’ve heard online that I hadn’t thought about until I read it, I think it was Casey Newton on Platformer, which is a great newsletter about this space, social media, and moderation, that you all should go read if you haven’t had a chance. He pointed out that the timing of this, with Trump’s tenure in office waning, is more than coincidental: Congress has just certified Joe Biden’s election. So it is happening. Joe Biden is going to be president. The Trump administration is leaving. And with the Trump administration leaving, conveniently there’ll be a new FCC. There’s a new makeup of the House and Senate. There are going to be new people on the committees that previously were threatening regulation against Twitter and Facebook. And I wonder if these companies now feel that they have a little more room to moderate Trump now that they know he’s going to be leaving power in the next two weeks. I mean, previous to this, you had to wonder, if you were Facebook, if you were Twitter, no matter what Trump says, if we ban him, that is going to be exhibit number one in the next congressional hearings about, “Do social media companies have too much power?”

 

[00:06:47] SY: Absolutely. Absolutely. Look, I definitely think that it was an easier decision because there are only two weeks left. I think it would have been a much bigger decision and a much harder decision. And frankly, I just don’t know if they would’ve come to the same conclusion if this had happened at the start of his term. If this was the celebration of a new Trump administration, would they have done the same thing? I don’t know. Let me put it this way. If the violence hadn’t happened, I don’t think they would have banned him. I think it still would have been history in the making, he’s a public figure and needs to be recorded, like all those reasons that they’ve used in the past I think would have still stood, and I don’t think they would have deplatformed him, even if he said awful things, if the violence had not happened. But because it did happen and there were only two weeks left, I think it was an easy decision to say, “Well, there’s a chance we could be held responsible for something truly terrible happening, and also we’re probably not going to get that much flak for it.” So I think it made it an easier decision.

 

[00:07:50] JP: I’m really curious about the deplatforming of associated accounts. So not just Trump’s account, but his campaign. And we’re seeing Twitter going after QAnon conspiracy theorists, white supremacists, people with the hashtag “stop the steal”. They’re banning them now as well, which I see as kind of a broader stroke against anti-government sentiment, anti-government action. I was thinking about how other governments have banned people for anti-government actions. I guess that’s different because that’s a government taking action to ban their own people versus a third-party company. But one of the criticisms that Twitter and Facebook have received from Europe has been, “Should private companies have this power to effectively muzzle one of our politicians?” I mean, yes, I’m not trying to argue that Twitter should be giving Trump a platform for what he’s saying, inciting violence. But on the other side, he’s an elected figure, and we’re kind of relying on Facebook and Twitter to do what we would hope a functioning Congress would do in our country.

 

[00:09:01] SY: Yeah. I mean, to me, it’s as simple as there are terms of service that you must abide by. It’s just that simple. You know what I mean? It’s not really about who you are, how you got there, what the duty is. To me, all that is completely irrelevant. If you violate the terms of service, then you don’t get to participate on that platform. Maybe that’s a harder decision to come by when someone is an elected official, but ultimately, inciting violence is a clear violation of terms of service. So I’m missing why there’s even a discussion. You know what I mean? Because those are literally the rules. Those have been the rules since the establishment of any platform. Every platform comes with terms of service and you have to abide by them. It’s just that simple. And it’s not a surprise. It’s not like they said, “Whoa! Whoa! Whoa! Surprise rule that we never told you about before. By the way, we’ve declared this because of something you did.” It wasn’t in response to him. Those rules have been there forever.

 

[00:09:56] JP: Yeah, the thing I struggled with was the rules have been there forever and he’s been getting away with it forever.

 

[00:10:00] SY: That’s true.

 

[00:10:00] JP: That’s the part I find very frustrating. It’s like, “We’re going to enforce the rules for this particular person now.” So one of the side effects of Twitter banning Trump and starting to crack down on, oh, should I say Trump supporters, QAnon supporters, white supremacy groups, is that they are also leaving Twitter and moving to encrypted messaging platforms like Signal and Telegram. And I suppose you could make an argument that this isn’t great because it makes it harder for law enforcement and other groups to track what they’re doing. For better or for worse like…

 

[00:10:35] SY: That’s a good point. Yeah.

 

[00:10:36] JP: Law enforcement was actually tracking these groups, organizing these attacks on Twitter. Now they didn’t do much about it, but it was there in the open for them to see.

 

[00:10:43] SY: Ooh, that’s a really good question. Just thinking about encrypted messaging apps in general, I’ve always kind of seen them as the good guys, like that’s how we escape government surveillance and corporate surveillance and all these things. I kind of forgot you could use them to plan bad things too. I was like, “Oh, yeah, I guess the enemy can use it as well.” Yeah. That’s a really tricky one. I mean, I guess it’s more of platform versus tool, if that makes sense. Signal and Telegram are tools for organization, whereas Twitter is definitely a planning tool as well, but more of a soapbox, a bullhorn. You know what I mean? It’s the place where you not only plan, but you get rallies and you get mass amounts of people to come and back you. And ultimately, the plan is a failure unless you get mass amounts of people to be there and show up at the rallies, et cetera, et cetera. So if I had to call the mass exodus a net positive or a net negative, I would guess it’s a net positive, because I feel like they lose their power to mobilize in large numbers. And I think that it’s the large numbers that gives them the power.

 

[00:11:57] JP: I also think it’s a net positive. I go back to the idea that just because Trump has been banned and there’s this crackdown happening on associated accounts, the sentiment does not go away. The misinformation, the climate is already there in our country, and that doesn’t go away because Trump has been deplatformed. Like you said, taking away the ability of these masses to organize is ultimately a net positive. Getting them out of the general conversation as much as possible is also a net positive.

 

[00:12:23] SY: Yes. Yes. Absolutely. I totally agree.

 

[00:12:27] JP: The other big tech story following the attack on the US Capitol Building was the Google and Apple app stores, Amazon Web Services, and other vendors dropping Parler from their services. Now Parler is a microblogging and social networking platform that launched in 2018 and has advertised itself as a free speech and unbiased alternative to Twitter. Its lax moderation policies have made it a preferred platform for those that object to Twitter and Facebook, such as Trump supporters, conservatives, conspiracy theorists, and right-wing extremists. Parler was actually used by Trump supporters in the days leading up to January 6th to help organize the Stop the Steal rally that led into the riot at the Capitol building. In the aftermath of the attack on the US Capitol, Google and Apple dropped the Parler app, citing violations of their guidelines and a lack of content moderation policies. Amazon Web Services, which provided backend cloud services for Parler, dropped the platform as well, stating in an email, “Recently, we’ve seen a steady increase in this violent content on your website, all of which violates our terms of service. It’s clear that Parler does not have an effective process to comply with the AWS terms of service.” Parler has since gone offline and is suing Amazon for dropping them. Parler CEO John Matze said in a Fox News interview, “Every vendor from text message services to email providers, to our lawyers have all ditched us too on the same day.” Now Parler is potentially a treasure trove of information that could help the FBI in their search for those that terrorized the Capitol and in understanding how it unfolded, and if the service is offline, that would make it much harder for them. But fear not: a hacker who goes by the name “Crash Override” on Twitter found a web address that Parler used internally and was able to archive all of the posts, videos, and images from the January 6th attack.

 

[00:14:06] SY: Wow!

 

[00:14:07] JP: Yeah. Parler’s infrastructure had a number of vulnerabilities that hackers were able to exploit, and this runs the gamut: everything from allowing access to all data via their API, even messages that they claim were deleted, to not rate limiting any of these API requests, to not randomizing the IDs of message URLs. Hackers set up a crowdsourcing system where multiple people could run scripts to pull this information from Parler before AWS completely shut down the service. As a result, they’ve released a huge amount of information for both researchers and law enforcement officials to use in tracking down those involved with the attack on the Capitol Building.
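
To make those weaknesses concrete, here is a minimal sketch of the kind of enumeration they allow. The endpoint and ID scheme below are hypothetical stand-ins, not Parler’s actual API; the point is only that sequential IDs with no rate limiting reduce a complete scrape to a simple loop, and that disjoint ID ranges are trivially split across crowdsourced volunteers.

```python
# Minimal sketch: why sequential IDs + no rate limiting = full scrape.
# The URL and ID scheme below are hypothetical, not Parler's real API.
import requests

BASE = "https://social.example/api/v1/posts"  # placeholder endpoint

def scrape_range(start_id, end_id):
    """Pull every post in [start_id, end_id); hand one range to each volunteer."""
    posts = []
    for post_id in range(start_id, end_id):
        # Sequential integer IDs make every URL guessable, and with no
        # rate limiting there is nothing to slow this loop down.
        resp = requests.get(f"{BASE}/{post_id}")
        if resp.status_code == 200:
            # If the API never purges "deleted" posts, they show up here too.
            posts.append(resp.json())
    return posts
```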

 

[00:14:44] SY: Well then.

 

[00:14:46] JP: So this is someone else being deplatformed. This is a platform being deplatformed.

 

[00:14:51] SY: Deplatformed, yeah, yeah, yeah. And also it’s interesting, I didn’t realize this, that it’s not just AWS and the app stores, it’s like their email services as well. Everyone is like shutting them down. The whole situation seems to be just like, “No, man, we’re not going to be a part of this,” which I think is very interesting.

 

[00:15:11] JP: Right. I had read in the Twitter thread from Crash Override and the other hackers that were crowdsourcing the download of the data, they had noted at one point that Twilio, a service provider for SMS and phone services, had dropped Parler. And as a result, Parler no longer had the functionality to send codes for verifying email signups, so they just turned that part off. So anybody could just sign up with an email and instantly be verified, and the hackers were able to just use random email addresses to sign up and start downloading data en masse. Crazy!

 

[00:15:44] SY: I’m very proud of the hackers involved. Go Crash Override and their team of crowdsourced developers who all ran that script and got all that data. That’s incredible, because one of the biggest complaints that I saw on Twitter on January 6th, when we saw the mass number of people just leave the Capitol, was that it just felt like they got to go home.

 

[00:16:09] JP: Right.

 

[00:16:09] SY: You know what I mean? Like after all of that, you know that video of that one woman being escorted from the Capitol by the police. I mean, we don’t know if she was actually inside or not. But either way, being very carefully escorted down the stairs, holding hands with the police as she left, which is kind of like, “You just go home after this? You’re just home in time for dinner and we’re just going to pretend like nothing happened.” That was the thing that I think maybe pissed people off the most: the seeming lack of consequences immediately after. It was a relief to hear the next day that a lot of people were getting arrested and were being tracked down and added to the No Fly List. It seemed like things were happening, and it’s really great to know that hackers got to play a role in that, got to support that movement, and helped justice be served. So as a developer, it’s a proud little moment for us.

 

[00:17:02] JP: This reminds me a lot of when we talked to folks from the COVID Tracking Project last season and they talked about having to step up and fill in the gaps that the government was leaving in terms of tracking COVID infections and rates and making that information available. And this strikes me as very much the same situation: we have independent citizens and hackers stepping up and doing data collection that our government or law enforcement agencies could have been doing, filling in the gaps for what law enforcement wasn’t doing in the aftermath of this attack. The FBI hasn’t commented, but there’s been a lot of speculation that the FBI is directly using this information. One of the examples brought forward is how fast people were put on the No Fly List after this attack. There’s speculation that their names were being pulled out of this data from Parler.

 

[00:17:56] SY: That’s what I heard. Yeah.

 

[00:17:58] JP: I wanted to know what you thought about the idea of infrastructure and service providers deplatforming customers. That’s something that I think is a unique wrinkle in this part of the story in particular, in that Parler had passed itself off as the, “Come, chat however you like. We’re not going to moderate you. We’re going to let you do whatever you want,” platform. And now they’re basically complaining to their infrastructure providers, “Hey! You can’t just drop us as a customer.” But it seems like that’s exactly what private services can do, right?

 

[00:18:28] SY: What they can do. Yeah. I mean, with these companies being so big and us being so dependent on them, I think it’s really easy to forget that ultimately they are private companies. And they have been, politically uninvolved I guess is a way to put it, for so long, because I think it’s in their best interest to have everyone participate, right? If you don’t turn any customers down, then you make more money. So I think it always felt fair and equal and protected because their goals are aligned, right? The more people they have on their platform, the more customers they have, the better off they are. So it feels like it’s almost our right to be on Amazon and our right to be featured in these stores, et cetera. And this is just a little reminder that actually it’s not, and ultimately there are rules that you have to play by and there are things that you have to do and you have to fall in line. And this was a little reminder from private companies to say, “Hey, we’re not going to allow this on our platform, and we don’t have to, and there’s nothing preventing us from doing so.” It’s just kind of interesting that they’re suing Amazon. I didn’t read what the lawsuit actually said, but I can’t imagine there’s any… Amazon is not obligated to host them. They’re just not. So what could they be suing against?

 

[00:19:54] JP: I read a little of the suit. It basically claims that Amazon shutting down Parler is anti-competitive because it boosts Twitter’s business, which is an interesting point. There’s been a lot of call for Facebook and Twitter to have more competition. I guess there could be an argument made to say, “Well, if you want more competition, infrastructure providers do play a part in providing that.” Yes, Parler could go and physically run their own servers and they could physically run their own network connections to those servers and basically replicate everything AWS did. But that makes the barrier to competition against Facebook and Twitter even higher. On the other hand, it’s a private company. Right now they’re not regulated.

 

[00:20:41] SY: Yeah.

 

[00:20:42] JP: I wonder if this will come up in future hearings on these big tech companies. We heard all these calls for regulation against Facebook and Twitter; their moderation policies and who they choose to allow on their platforms were brought up last summer in the congressional hearings. If there is another round of congressional hearings, I can very well see Twitter and Facebook saying, “Well, look, if you are to regulate us and tell us who we can and can’t do business with, you could have a situation where we have a group like Parler or we have a group like QAnon and we’re not allowed to moderate them or kick them off our platform because you’re regulating us.”

 

[00:21:21] SY: And I guess the issue I have with that lawsuit, just talking about competition, is Amazon is not saying, “Parler, you are banned from our platform because we don’t like you.” It’s, “We’ve warned you. We told you.” There were literally steps they could take, well, at this point it’s probably too late, but there were steps they could have taken back then to stay on the platform. From what I read, they were given multiple warnings. I feel like the solution was very easy: address these problems and you can stay on our platform. You know what I mean? I don’t think they’re anti-Parler as a platform. They’re anti-violence, which I think you should be able to be against. And if you address the violence, then you keep the platform. You know what I mean? It feels like an easy fix. If they had just played by the rules, I’m sure Amazon would have been happy to have them.

 

[00:22:21] JP: A great analogy I heard in the last couple of days was to swap out QAnon or Trump supporter groups with ISIS. Imagine you have a platform that is favored by ISIS and they’re using it to coordinate attacks. Well, obviously no one is going to come out and be like, “Whoa! No, that’s censorship. You should have let that platform stick around.” No, actually, according to US laws, you have a responsibility to stop those groups.

 

[00:22:50] SY: Yeah.

 

[00:22:51] JP: I think a lot of this comes down to treating these groups as domestic terrorists. I was surprised to find out there are really not a lot of laws around domestic terror groups and their communication and hosting them online. Now if you had an ISIS group using your platform, that’s an international terror group, they’ve been termed as such, and you have legal obligations as a platform to kick them off and report them to the authorities. But those laws don’t exist for domestic terror groups. I think that’s a huge oversight or weakness in our laws.

 

[00:23:23] SY: That’s a really good point. Absolutely. Coming up next, we speak with Dave Gershgorn, Senior Writer at OneZero at Medium, about a conspiracy theory about the Capitol attack that has its roots in a dubious AI company after this.

 

[MUSIC BREAK]

 

[AD]

 

[00:23:55] SY: RudderStack is the smart customer data pipeline. Easily build pipelines connecting your whole customer data stack, then make them smarter by ingesting and activating enriched data from your warehouse, enabling identity stitching and advanced use cases like lead scoring and in-app personalization. Start building a smarter customer data pipeline today. Sign up free at rudderstack.com.

 

[00:24:17] JP: Are you looking to build a chat for your next big project? Save time building in-app chat, voice, and video for mobile and web applications with Sendbird. Get to market faster with Sendbird’s UI kit, pre-built UI components, best-in-class documentation, and support for developers. Join companies like Reddit, Delivery Hero, Yahoo Sports, and Hinge. Start your free trial today at sendbird.com/devnews.

 

[AD END]

 

[00:24:43] SY: Here with us is Dave Gershgorn, Senior Writer at OneZero at Medium, covering artificial intelligence and its impact on society. Thank you so much for joining us.

 

[00:24:52] DG: Of course. Thanks for having me.

 

[00:24:53] SY: So tell us about your career background, covering tech and AI.

 

[00:24:57] DG: So I have been writing about AI for about five years now. When I was an editor at Popular Science, I was covering things like virtual reality and consumer technology and artificial intelligence. And the more that I wrote about it, the more it seemed like this was something that was definitely going to be the biggest story for the next decade or more. So I started writing more and more and more, and suddenly it became my whole beat, and artificial intelligence turned from something that was very much the realm of academia into sort of the biggest buzzword in tech. So it’s kept me very busy since then. And since Popular Science, I’ve worked at Quartz, the digital news startup, and now I am at Medium’s OneZero, the technology publication that was spun up in 2019.

 

[00:25:47] JP: So one of the reasons we asked you on was to talk to us about some of the conspiracy theories that are going around regarding the riot and attack on the Capitol Building this week. I was wondering if you could talk about the latest conspiracy that Antifa were somehow embedded in the attacks on the Capitol and specifically the claims that these people were identified with facial recognition technology.

 

[00:26:14] DG: So a little bit of background. The night of the attack on the Capitol Building in Washington, the Washington Times published a story that claimed an AI startup called XRVision had actually identified some of the rioters as Antifa, or antifa. I guess this is when we get into the argument of how that word is pronounced. So the Washington Times’ story claimed that facial recognition had been used and that they had been passed this information by a retired military officer. But as the story evolved, we found that the company, XRVision, was sort of an unreliable narrator for this. And then the company itself said that the claims in the Washington Times’ story were actually false, and the people that they had recognized were actually pro-Trump, with potential ties to neo-Nazi groups. So the story kind of turned on its head as more information came out. But the company itself was sort of a flawed source from the beginning, because its CTO had continuously posted anti-Biden conspiracy theories and other kinds of propaganda on the company’s LinkedIn profile. So the story was kind of rotten from the beginning.

 

[00:27:38] SY: So within that, is it that the facial recognition technology was just doing a bad job, like it’s just a crappy algorithm? Or do you feel that it was just a conspiracy and they kind of hand-waved the algorithm as a way to legitimize the theory itself?

 

[00:27:59] DG: That’s a really good question, and there are two parts to it. The first is that there’s a lot that we don’t know about the algorithm. The company would have to have a database of protestors from previous antifa demonstrations or something that the algorithm would have been trained on to actually identify the people in these photos. There was no evidence that they actually have that, or that this is a capability the company is proficient at. So I think there’s a lot of missing information as to whether the technological capability of this company is up to par for the task. And I think the second part is that the company was selling facial recognition as a positive identification of a person, which is not necessarily what the technology is geared towards. If you look at law enforcement use of facial recognition, they very specifically say that this is to generate leads and to investigate further. It would never hold up in court that facial recognition matched someone to an identity and that that’s proof it’s the person in the photo. So I think the big lesson here, especially for people developing the technology and especially for people who are selling facial recognition, is that it is a predictive tool, not a tool that necessarily translates to 100% accuracy in the real world.

 

[00:29:29] JP: Can you maybe talk a little bit more about how law enforcement would use facial recognition technology? I think a lot of us, to be honest, see facial recognition technology in movies and television shows and we equate it with a forensic tool like DNA evidence or something like that.

 

[00:29:47] DG: Yeah. So this is actually one of the main things that I cover. Law enforcement, whether it be local, state, or federal, are typically using some enterprise-grade biometrics companies that supply them with facial recognition. The big ones are companies that you might’ve heard of, like NEC, which made consumer electronics and was much bigger in the ’90s. The company behind RealPlayer actually makes facial recognition technology now.

 

[00:30:14] JP: Whoa!

 

[00:30:15] SY: Interesting.

 

[00:30:16] DG: Yeah. Toshiba, Panasonic, these are some of the big players, but then there are also biometrics specialty companies, like the French company IDEMIA, which actually does a lot of the TSA PreCheck facial recognition and things like that. So it’s not Facebook or Google or Amazon or any of the tech giants supplying facial recognition to law enforcement. That’s kind of the first big misconception. Now the images that law enforcement use are typically mugshots, booking photos. More than half the states in the US allow driver’s license photos to be used for facial recognition matches. And there are a bunch of databases, like visas and passports, that are used by the State Department, and it’s really kind of jurisdictional for state and local departments. There is a caveat, especially from 2020, where Clearview AI, which has been a huge kind of revolution in the way that law enforcement uses facial recognition, has turned all that on its head, and there are more than 2,400 law enforcement agencies that use Clearview AI. And what Clearview AI promises is: we are going to scrape billions of photos from the web, and almost anybody that you put into this system, or a photo that you grab off of social media or off of CCTV footage, we can match and find. So that was kind of seen in the aftermath of the Capitol Building riots, where there were local police departments, like one in Alabama, who were just searching for people that they saw pictures of on social media. They popped them into Clearview AI and forwarded it off to the FBI.

 

[00:31:56] SY: So this example of XRVision having a flawed algorithm kind of made me think, “Huh, just because you claim to have a fancy algorithm doesn’t mean that it works.” And just because it’s public or you have a startup based on it or a company selling it doesn’t necessarily mean that it holds up to standards, which made me wonder, are there standards? Is there kind of a body that says, “Hey, this facial recognition tool or whatever type of machine learning algorithm actually works and here’s how good or how legitimate it is”? Does that exist?

 

[00:32:29] DG: So in terms of regulation, no. Pretty much anybody can spin up a TensorFlow model, train it to do facial recognition, and then sell it. There’s no barrier to the market, but there are a few industry standards and practices that legitimize companies. For instance, the US government’s National Institute of Standards and Technology actually runs a facial recognition vendor test. So on a recurring basis, they will request companies to send them their facial recognition algorithms, and then they’ll run a series of tests. How well does it work on one-to-one matching, verifying that this person is who they claim to be, in terms of, say, building security, right? So if you walk up to a building, they have your face, you get your face scanned, and they let you in. There’s also one-to-many matching. This is like the classic law enforcement use case, where they pull a photo off of CCTV and then match it against the database of all the mugshots that they have. And then they do a series of other tests. Recently they started testing race and gender equity in the results, and they actually found that facial recognition algorithms, as a whole, generally perform much worse on people of color and women. And it’s also been found that the algorithms are less effective on children.
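
To illustrate the distinction, here is a toy sketch of the two matching modes a vendor test like NIST’s measures, assuming faces have already been converted to embedding vectors. The function names and the 0.6 threshold are illustrative assumptions, not NIST’s actual protocol.

```python
# Toy sketch: 1:1 verification vs. 1:N identification over face embeddings.
# Embeddings, function names, and the threshold are illustrative assumptions.
import numpy as np

def cosine(a, b):
    # Similarity between two embedding vectors, in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, enrolled, threshold=0.6):
    # 1:1 ("is this person who they claim to be?") -- building-access use case.
    return cosine(probe, enrolled) >= threshold

def identify(probe, gallery, threshold=0.6):
    # 1:N ("who in the mugshot database looks like this CCTV frame?").
    scores = [cosine(probe, g) for g in gallery]
    best = int(np.argmax(scores))
    # Returning an index is a lead to investigate, not proof of identity.
    return best if scores[best] >= threshold else None
```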

 

[00:33:56] SY: Interesting.

 

[00:33:57] DG: And there are also some other indicators that, as a reporter, I look for on whether a facial recognition company is legitimate: whether they publish academic research, whether they have explained what kind of algorithms they use, what kind of datasets they use, and things like that. So there are a bunch of kind of soft metrics that legitimize a company as well.

 

[00:34:19] JP: You mentioned the efficacy of facial recognition on different race and gender groups and children. And I’m wondering if you have any thoughts about what the role of developers and people in the tech community should be to help the general public, help business understand the limitations of facial recognition and what some of its ethical implications are.

 

[00:34:42] DG: I think that above all being extremely realistic and upfront in terms of the limitation of the technology is crucially important and the second part of that is working to create more equitable databases and algorithms that minimize disparity as much as possible. So the first part is not to oversell it and the second part is to do the work on fairly compensating people to be a part of datasets and making sure that they understand the privacy and data implications of being in a dataset that’s either open or used to perform facial recognition.

 

[00:35:24] SY: So when it comes to the specific example of a machine learning algorithm that is really flawed and not really doing what it claims to do, what role should developers have? I don’t know what role they did play, but what should they have played when you think about your history, your five years of covering AI and how it affects the world? What is the ideal role that a developer would have played in that situation?

 

[00:35:48] DG: I think that there is no one better to speak up and really question the motives of why this technology is needed and how it can be made better. Developers and the people who are building this technology have an intimate knowledge of its flaws. And if the technology is a fundamental mismatch for the problem that’s trying to be solved, there are alternative ways to verify identity. And if there are disparities that can be overcome or if there are ways that a product can be designed around the shortcomings of the technology, then I think that it is very much the developer’s responsibility to find those solutions.

 

[00:36:36] JP: Just switching gears a little bit. I was curious if you could tell us more maybe about the details of how this kind of technology works. I know you covered it a little bit. In my mind, I have this weird model of like some computer algorithm looks at a face and does a bunch of tests. I don’t really think that’s right. Are these models being trained with machine learning? Is there something else that’s actually happening? Say I was a developer working at one of these companies, how much input would I have into how this technology works?

 

[00:37:08] DG: So the standard facial recognition algorithms that you’ll see being sold today are based on deep learning. Deep learning is basically the slightly more technical term for what’s generally called artificial intelligence, or deep neural networks, and neural networks are kind of a branch term, and it’s all very confusing and overlapping. But yes, deep learning and AI and machine learning are used in facial recognition. To give a rundown of how the model is trained, there’s this example that I like: if you take out a pen and a piece of paper and you write down the number two, that’s an example of one training data point. Then write over that number two again, and write over it again, and again, and you’ll see that the form of the number two is still visible, but there are tons of little variations, tons of little edge points, tons of little differences in each example of two that you’ve written. And this is kind of a stand-in for the training process of a machine learning algorithm. The machine learning algorithm is going to try to draw kind of a best-fit line on that number two, and that learning is basically what’s applied in the future if it sees somebody write a number two. How well does that little writing somebody does conform to its “idea” of the number two? So it’s a little test that you can do yourself to train this model. But what the algorithm is doing for facial recognition is very similar, except it’s much, much more complex. Each company that builds facial recognition and sells it has a slightly different way of doing it. Some compute the distance between a person’s facial features, like eyes, nose, and mouth, and they’ll try to adjust every image to make sure the pitch and the yaw of the image is just right, and then run a model that extracts facial features and tries to match them against facial features in a database. Others use a much more hands-off deep learning approach, where they will just put the image in and the algorithm determines similarity and returns matches. So there’s a multitude of approaches, but they all pretty much stem from this very basic pattern-matching machine learning approach.
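
Compressed into code, the pipeline described here looks roughly like the sketch below. The detector and embedding model are passed in as placeholder callables rather than named libraries, since, as Gershgorn says, each vendor’s internals differ; this is a sketch of the general shape, not any company’s implementation.

```python
# Rough shape of a face recognition pipeline: align, embed, then match.
# `detect_and_align` and `embed` are placeholder callables standing in for
# a real face detector and deep embedding model; nothing here is vendor code.
import numpy as np

def recognize(image, database, detect_and_align, embed, threshold=0.6):
    face = detect_and_align(image)   # normalize crop, pitch, and yaw
    probe = embed(face)              # deep net -> feature vector
    best_name, best_score = None, -1.0
    for name, enrolled in database.items():  # enrolled embedding per identity
        score = float(np.dot(probe, enrolled) /
                      (np.linalg.norm(probe) * np.linalg.norm(enrolled)))
        if score > best_score:
            best_name, best_score = name, score
    # A "match" is just the nearest pattern above a threshold -- a lead, not proof.
    return (best_name, best_score) if best_score >= threshold else (None, best_score)
```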

 

[00:39:39] JP: Okay. So a quick follow-up question then. I think we’ve all read stories about how a lot of these facial recognition technologies only work on white people, or only work on men, or they work better on white men and don’t work quite so well on people of different races and genders, or maybe mixed race, mixed gender. Why is that?

 

[00:39:57] DG: First, I think it’s really difficult to talk about this topic without mentioning the breakthrough work of Joy Buolamwini and Timnit Gebru and Deborah Raji, who wrote this incredible paper called Gender Shades in 2018 that really broke open this entire field of research into how facial recognition especially is susceptible to being much worse on women and people of color. Their work showed that the largest corporate facial recognition algorithms, like Amazon’s and IBM’s, performed up to 30% worse. And the reason was that the datasets simply were not diverse enough, and there are other, more technical reasons that can be found and explained in the paper. But a big takeaway is that the traditional datasets that you will find for facial recognition are overwhelmingly male and white, and the technical tuning of these algorithms just has not been done to overcome those limitations of the dataset. I mean, there’s also, I think, an element of contrast in facial features, which could be overcome if there were more data that the algorithm was able to see. And this is similar to the racial bias of film when analog photography was first being developed by, I believe, the Kodak Company. Film was not made for people with darker skin tones, and when people of color had photos taken of them, they would be horrendously underexposed. And that was something that needed to be corrected by the people who made the film. So I think we’re seeing an analog in that the developers of facial recognition are not adequately thinking of the people who the technology is actually going to be used on. And it is a typically white, typically male field of research and development that is not necessarily looking at the entire scope of the technology. There’s also another element to this question, which is gender. Facial recognition algorithms right now are completely ill-equipped to deal with anything outside of the gender binary. Pretty much every single facial recognition algorithm that tries to detect gender classifies it as either male or female, and there’s been almost no investment in doing otherwise in the industry. So people who are transgender or gender non-binary, or who don’t fit neatly into the gender binary as it is commonly known, are going to face a lot of problems when trying to be classified by these facial recognition algorithms. That’s not to say that their faces won’t be matched in something like a law enforcement algorithm that isn’t necessarily trying to classify gender primarily, but for some of the facial analytics and things like that, it’s something that the industry just hasn’t focused on at all.

 

[00:43:05] SY: So we spent a lot of time, I think, on this show and just in general talking about the issues, the ethical issues with AI and machine learning. And I wanted to know in the five years of you covering this area, are there good examples? Are there companies that maybe we should try and emulate people who are doing it the right way?

 

[00:43:22] DG: That’s a really tough question. So my job is to kind of look at the seedy underbelly and figure out who might be doing the wrong thing, so I don’t have a ton of examples of people who are doing really good things. I think it’s a really interesting question: what are the ethical and good uses of artificial intelligence? It’s really hard to find fault with something like Face ID, which is Apple’s facial recognition. It’s kept locally on the phone and is a huge ease-of-use boost for the personal device. And it’s something that I use every day and I love. So an implementation of facial recognition like that, which is private, which is easy to use, and which kind of makes life better every day, I think that’s a classic example of the technology being used for good. There’s also security: one of the few non-controversial uses of facial recognition in the military is securing military bases. So that’s another kind of security-focused implementation.

 

[00:44:35] SY: Well, thank you so much for joining us today.

 

[00:44:38] DG: Of course. Thanks for having me.

 

[00:44:45] SY: Now for a palate cleanser. Coming up next, we chat with Monica Chin, Writer at The Verge, about this year’s virtual CES experience after this.

 

[MUSIC BREAK]

 

[AD]

 

[00:45:06] SY: Are you looking to build a chat for your next big project? Save time building in-app chat, voice, and video for mobile and web applications with Sendbird. Get to market faster with Sendbird’s UI kit, pre-built UI components, best-in-class documentation, and support for developers. Join companies like Reddit, Delivery Hero, Yahoo Sports, and Hinge. Start your free trial today at sendbird.com/devnews.

 

[00:45:32] RudderStack smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make events streaming easy and their integrations with cloud applications like Salesforce and Zendesk help you go beyond event streaming. With RudderStack, you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at rudderstack.com.

 

[AD END]

 

[00:46:08] SY: Joining us is Monica Chin, Writer at The Verge. Thank you so much for being here.

 

[00:46:12] MC: Absolutely.

 

[00:46:13] SY: So let’s get started by having you tell us a little bit about your career.

 

[00:46:16] MC: At The Verge, I write about laptops and laptop hardware, from components on up, basically everything related to laptops. I review laptops, I write news, and I also write about technology and education, both laptops and software.

 

[00:46:33] JP: So this year, CES went completely virtual due to, of course, the COVID-19 pandemic. Can you describe what CES is usually like and compare it to the virtual version this year?

 

[00:46:48] MC: Yeah. So CES is usually this big convention in Las Vegas. Everyone goes, and basically the entire city is taken over by this massive conference, and every big company is there: Dell’s there and Apple’s there and Google’s there and everything. I mean, it’s a really big operation. Google last year had a rollercoaster that they’d built in the parking lot of one of the convention centers, and they had people go and ride this rollercoaster. And there’s a show floor where everyone has a booth and you can go and visit. Samsung rented out a big suite at the Aria last year and basically set it up as a smart home. It’s this huge thing and everyone goes, everyone gets sick. There are big parties at night, but that’s just part of the thing. So obviously they couldn’t have that this year. So instead they had an online thing. All the keynotes and stuff still happened, but they happened virtually. So basically this week I’ve just been watching keynotes happen on my computer. Obviously the negative is you don’t get to be in Vegas doing fun Vegas things, but it is nice in that, instead of running around from convention center to convention center, you can just kind of sit on your couch and watch, which is nice.

 

[00:48:04] SY: Yeah. Because I’ve never been to CES, but I’ve seen the articles and the videos and the clips and photos. And it just seems like such a cool place to be, because you are right front and center with a lot of future tech stuff, right? Like demos and things that aren’t quite out yet, and you get little previews and stuff. How do you recreate that in a virtual form? Or is it possible to capture the wow factor of some of this cool technology?

 

[00:48:30] MC: Yeah. Every company has done it a little bit differently. Mostly what we’ve done is we’ve had briefings over Zoom, and we’ve met with the executives and the product managers and everyone over Zoom or over the phone. And then there have also been virtual press briefings, and those are just a bunch of people on a Zoom call, and they take us through the details of all the products they’re going to be announcing. Then this week, there were public keynotes. So they actually filmed the CEOs and all the presenters giving keynotes, and then they posted them to YouTube or to their websites. So that was what most people did. Some places did actual virtual showrooms. Acer was one company that literally created a virtual showroom that you could walk through, and you could click on all the laptops and it would show you the details. So that was kind of cool. Obviously the one thing that they can’t really recreate is being able to actually try the product, because you can’t really try a product over Zoom. So that’s the one thing we kind of missed out on: we didn’t really get to get first impressions. Samsung actually did an in-person thing that I went to, and they were checking temperatures at the door, and there was a whole questionnaire you had to fill out about whether you were having any symptoms and whether you’d interacted with anyone that had any symptoms of COVID-19 and stuff like that. But for the most part, it’s going to be a surprise what using these products is actually like. So that was sort of the one thing we missed out on.

 

[00:49:59] SY: What have been the biggest letdowns of the experience?

 

[00:50:03] MC: You know, one of the things that you really don’t get to do is interact with other media. One of the cool things about actually going to Vegas, not even really related to the fact that a conference is happening, is that not only do you get to see all these products, but you see these people that you work with all year, whether it’s product managers at the companies or the PR managers at the companies, or just other journalists whose reviews you read and who you talk to on Twitter. One of the cool things about going to CES in Las Vegas is that you sort of get a reminder, like, “Oh, these are actual people. They’re not just names on a screen.” That’s not something that you got to have at all this year. There were virtual, like Zoom, networking things, but it’s definitely not at all the same as everyone being in a big conference center where you run into people and you get to have drinks together and stuff like that.

 

[00:50:52] JP: That leads into my next question. There’s been a lot of talk about what parts of digital experiences for conferences are going to continue and which are going to go away. And I was wondering what your thoughts were on that. Do you think we’re going to see some aspects of online keynotes and online conferences going forward? Or do you expect us to return to the way things were?

 

[00:51:14] MC: I definitely don’t see why they couldn’t keep having keynotes online. I know that most of the big ones prior to this year were already live-streamed to the public; they just happened to have people in the audience. So that should definitely continue, because I think a lot of people obviously don’t have the opportunity to go to Vegas but are still really interested in seeing the new products. I have a lot of friends who will livestream the keynotes every year. They really look forward to that. As to whether there actually needs to be people at the keynotes at all, I don’t think I lost any information or lost anything about the experience by not having people at all the keynotes. But I do think there is a certain thing you miss out on when you hear people applauding when these new releases come out, or when you hear the audience’s reaction. When you’re watching an in-person event, like the big Apple events where they unveil the iPhone every year, for example, and you just hear the crowd burst into applause when the new iPhone comes out, that’s kind of cool to hear, and it kind of reminds you just how many people care about this stuff and how cool people find this stuff.

 

[00:52:20] SY: When you think about CES before it happens, when you’re anticipating it, do you have any kind of expectations, any hopes, anything that you think developers should be working on that you hope to see at the events?

 

[00:52:33] MC: One of the things that I think is cool is ASUS launched a bunch of new ZenBook Duos. So the ZenBook Duo is a dual-screen laptop. There’s the main screen, and then there’s a secondary screen built into the keyboard deck that’s called the ScreenPad. These aren’t the first ZenBook Duo models, but they’ve made a bunch of updates to them. And one of the things that they stressed this year is that they’re really trying to get people to develop for the dual-screen form factor. So they said Adobe is putting out some of its software now customized for the dual-screen format, so there’s specific stuff that goes on the bottom screen and specific stuff that goes on the top screen for content creation work. So that’s one thing I’m kind of excited to see more of this year. Because when I’ve tried dual-screen laptops in the past, it’s kind of like you put Twitter and Discord on the bottom screen and you do all your work on the top screen, and it’s nice, but it’s not really a life-changing experience necessarily. But I do think that if more programs start to be developed this year that really take advantage of the dual-screen format and are optimized for it, they could be really cool to use and could make parts of life a lot easier.

 

[00:53:50] JP: What are some of the most noteworthy things you’ve seen announced? And I’m particularly interested in whether you’ve seen anything announced or discussed that would be interesting to software developers and engineers.

 

[00:54:00] MC: I would say that two of the biggest announcements that we’ve seen this week are the AMD Ryzen 5000 line of processors and NVIDIA’s GeForce RTX 3000 series of graphics cards. These are two really powerful lines of laptop chips. And we’ve seen just a whole dump of gaming laptops come out that incorporate these two chips, from Gigabyte, from ASUS, from Origin, from MSI, and that includes the dual-screen ZenBook that I just mentioned. The 15-inch one will have an RTX 3070.

 

[00:54:34] SY: Very cool. So one of the things that I’ve seen because of COVID is this rush of developers and new startups popping up to address this new remote life, trying to create different tools and techniques and platforms to make remote meetings, events, and conferences more interesting, more exciting. And I’m wondering, what do you think developers should be working on in terms of making these events and these remote experiences even better for folks? What have you seen that worked, and what do you think we should be working on for the future?

 

[00:55:07] MC: I think one thing is definitely just being really clear about when all this stuff is and how we can access it. When you’re in a conference like this and you have all these companies that are just throwing event invitations at you, it can be really easy to stick something on your calendar and forget it and be like, “Oh, I have seven keynotes today.” And then when the time comes, you’re not sure if you’re supposed to go to YouTube, or you’re supposed to go to their page, or you’re supposed to go to a Zoom link, or what exactly is happening. Whereas when you have an in-person event, usually you get an address and you’re like, “Okay, I’m going there.” So I think definitely being clear as early as possible about how exactly the event is going to happen and what it’s going to look like is a good thing for sure. One thing I really liked about Acer’s virtual showroom, the one I mentioned earlier, was that it was flexible. We could go and make sure we saw all the products and got all the news at the time that worked best for us. So if you’re not doing a virtual showroom, making sure that there’s a recording of the livestream somewhere, or a record of it, so that people who weren’t able to make it can go and watch later. I think that’s one of the benefits of the virtual format too: if you miss one of these virtual talks or virtual events, you can go and just watch it later and make sure that you didn’t miss anything.

 

[00:56:30] JP: I was wondering if you had any sense of when we look back on CES 2021, what’s the general theme or trend if there is one? I’m thinking like last year it was foldable phones. A couple of years ago, it was 3D TVs, 8K televisions. It seems like every year there’s kind of like for better or worse one trend kind of comes out of CES. I don’t know if that’s more of a narrative that’s constructed by the press afterwards. But I wonder if you’re seeing a trend coming out of CES this year.

 

[00:57:03] MC: Yeah. I mean, one big thing with laptops is the 16 by 10 aspect ratio. So the 16 by 9 aspect ratio is like what a lot of laptops have used for a very long time. And nobody really likes it. It’s cramped. It’s not really a super-efficient use of screen space. But this year, a lot of non-gaming laptops are finally moving to 16 by 10, which is a little bit taller than 16 by 9. I think it just gives you a lot more room without really increasing the size of the laptop. So I think that’s really cool. Another thing is that a lot of gaming laptops now have QHD screens and I think part of that is that with this new hardware from NVIDIA and AMD as well as Intel’s Tiger Lake chips, which we don’t really know that much about yet, companies are starting to expect that you’ll be able to run games at playable frame rates at that resolution, which I thought was really cool. But in terms of laptops, there’s definitely a lot of new stuff. And of course, all eyes are on Intel versus AMD when it comes to which processor company is going to be on top this year. So honestly, I think that’s really the big story.

 

[00:58:11] SY: Very good. Thank you so much for joining us.

 

[00:58:13] MC: Yeah, totally.

 

[00:58:24] SY: Thank you for listening to DevNews. This show is produced and mixed by Levi Sharpe. Editorial oversight by Peter Frank, Ben Halpern, and Jess Lee. Our theme music is by Dan Powell. If you have any questions or comments, dial into our Google Voice at +1 (929) 500-1513. Or email us at pod@dev.to. Please rate and subscribe to this show on Apple Podcasts.