
Don: Good morning. Good afternoon. Welcome to On Your Radar. I’m your host, Don India, and I have the unique pleasure of speaking with industry experts on critical topics pertaining to data privacy, corporate compliance, cybersecurity, and artificial intelligence. And today is no different.

Don: I’m extremely excited to introduce our guest today, so let me set the stage for who’s joining us. She has had an amazing career. She has a law degree from the University of California, Berkeley. She’s practiced privacy law at some of the top law firms across our country. She was the Chief Privacy Officer for Nintendo, something my kids would absolutely love, and the head of data privacy for Apple Americas. She’s the founder of Raising the Digital Future, a learning vehicle for parents raising the first generation of humans in a digital world. And she is currently the Chief Privacy Officer at LinkedIn and a board member at RadarFirst. Please welcome Kalinda Raina.

Don: Kalinda, thank you for joining us on On Your Radar.

Kalinda: Thanks so much, Don. It’s great to be here.

Don: It’s good to see you again.

Kalinda: It’s good to see you, as always.

Don: So, before we get into some meaty topics of conversation, I know our audiences would absolutely appreciate learning about you…What was your journey like to allow for you to arrive at where you are as a Chief Privacy Officer at LinkedIn? Tell us more about Kalinda.

Kalinda: Ah, thank you for that question, because it has been kind of a funny journey in some ways. If you think back to when I started in data privacy, it really wasn’t a topic that, one, had a profession attached to it or, two, had many global-level issues focused on it. So it has really evolved.

Kalinda: I got interested in 1996, because I was looking for a topic for my college thesis. And my professor did not think the idea of whether the Internet should be governed by government or business was a manageable topic. Looking back, I kind of agree. We’re probably in a place today where that question is still unsettled.

Kalinda: So, at the time, he showed me a website called Amazon.com, which looked like an Excel spreadsheet and listed the books he had purchased, and he said, you know, what could companies do if they know the kind of books I like to read? And that got me down a path of looking into what companies could do with data. At the time, one of the things happening on the internet was cookies, which let you keep things in a shopping cart. That seems silly to us today, but back then there wasn’t really a way to remember what a user had looked at the last time they were at a website. So, cookies came along to solve that.

Kalinda: And cookies had originally been launched as an anonymous form of tracking. So, I wrote my paper on whether cookies could be used as a way for companies to market to individuals without knowing who they were. And over time, for some reason, that topic really captured my imagination. Maybe I read a few too many dystopian novels like 1984, but I thought, our data can control us.

Kalinda: So, I ended up going on to law school. At the time there weren’t classes on data privacy, so I kind of made my own path and ended up working at a firm that also didn’t have a data privacy practice. I worked between both our litigation and commercial teams to make it happen. So, it’s been a bit of a journey.

Kalinda: I didn’t really know, all along that journey, whether there’d be a true career attached to it. For a period of time I hedged my bets, splitting between being a privacy attorney and a transactional attorney, and happily it turned out that the world needs privacy attorneys.

Don: Who knew? Who knew the world needed privacy attorneys? I think you did back in 1996. You also knew that cookies were something back in ’96.

Kalinda: Better than just dessert.

Don: Yeah, cookies are not just for eating. I totally agree with you. So you are dealing with consent now at a different layer than what you were writing your thesis on back then.

Kalinda: Oh, yes.


Don: Yeah. So, thank you. And I appreciate that. I know our audiences are always interested in learning more about our guests and how they arrived at where they are today. So, thank you for that. So, let’s really dive into the topic that we discussed that we’d like to talk about today. And that’s the evolution of law as it pertains to data privacy.

Don: And what I’d love for you to do is take your history with the evolution of privacy law and apply it to what we’re seeing today in proposed and passed legislation with respect to cybersecurity and artificial intelligence. We’re seeing more and more legislation being proposed and passed, new rules such as the SEC cybersecurity disclosure rules.

Don: How are you looking at this from your position, in terms of the parallels that you see, or that could happen, between the evolution of law around cybersecurity and artificial intelligence and what we saw with data privacy?

Kalinda: Yeah, that’s a great question, Don, and a complicated one. Because when you look back over time, there has been quite an evolution in the area of data privacy laws. As I was mentioning, when I first started practicing law back in 2001, there weren’t even enough laws in place to justify an attorney at a firm doing that type of work.

Kalinda: And, you know, we could go pretty far back in time to the 70s, when the U.S. had some initial government regulations beginning to recognize the power of data in relation to government. But if you move forward a bit, part of what has evolved, both in the U.S. and Europe and now in Asia, in terms of data privacy and security laws is the fact that there is power tied to access to data.

Kalinda: And at this point in time, you know, 20 plus years on from the development of a commercial Internet, that data is incredibly important, not only in the way it’s used, but in the way that it’s protected, and its potential, quite frankly, to create new things, which is where AI gets involved. So, if you look in many ways, for those of us in the privacy space, we’ve seen Europe as, you know, a market leader in terms of what has to be done to comply.

Kalinda: What are the standards? What is the limit of what companies can do and often what is the cost of doing that in a way that is not meeting the law’s standards. So, for a period of time, here in the U. S., the most concerning issue had been security. And so, we have 50 different data breach laws in the U.S., not yet a federal consistent law. And what we saw here in the U. S. develop was the things that kind of worried us the most, healthcare data, kids’ data, financial data, and of course from a security standpoint, data breaches, is where we ended up with laws. Europe took a much more comprehensive approach even prior to the GDPR, but especially with the GDPR since that is when U.S. companies really had to start paying attention even if they weren’t located in Europe, to what Europeans thought about data privacy law. And that has been a much more holistic view of setting standards that companies then have to interpret and stand up by. And now we’re seeing in Asia, and I think India is going to be the most interesting region to be looking at as it rises with a data privacy law.

Kalinda: It is really the one region outside of Europe that is truly bringing in a law that could require U.S. companies in particular to comply and change their practices as a result. And you know, in many ways, what we’ve seen is that in the very beginning, these laws were focused on governments’ capabilities in terms of accessing and protecting that data.

Kalinda: And as things have developed, what we really see the focus being on is what are, not just tech companies, but any company doing with that data. And that is what is driving, I think, a lot of what we are seeing today in regulations, in fines, in the development of new laws. But, I think going into the future, too, part of the problem that we’ve always had with regulation has been that the lawmakers can’t foresee what the businesses are going to develop and create, which creates a real difficulty because what you end up getting here in the U.S. are laws that respond to the problems as they end up rather than trying to solve it from the beginning stages of setting standards to avoid some of those problems. And I think about that, particularly in AI, because what we what we see there is the technology developing so quickly that it’s really hard to know what a good law would even be that would protect anything 10, 15 years from now, because it’s hard to predict what that world would look like in terms of what the capabilities would be from AI.

Kalinda: And so you see this real tension here in the U.S. around trying to solve what regulators see in front of them while somehow also solving for where we’ll be 10 years down the road, and that’s a really challenging thing to do while trying to preserve the freedom for companies to continue to innovate. Europe has taken a much more explicit approach: we have the AI Act now, we have the GDPR applying, we also have, for some of the larger companies, the Digital Markets Act, and then we have the DSA as well.

Kalinda: All of these laws are trying to place restrictions on, in many ways, what you do with the data you get, how you process it, and how you use it. And that may stifle innovation in many ways. I think that’s the difference between the U.S. approach and the European approach. What those of us in this industry will have to watch is how that balance gets struck between allowing for innovation, both in how we protect ourselves from a security standpoint and in AI, while at the same time protecting against some of the dangers those technologies present.

Don: Wonderful response. Thank you. It elicits a tremendous number of thoughts in my head. And I think our entire audience is thinking, wow, I didn’t realize that depth could actually be part of the connective fabric of all of these things. But the connective tissue is data.

Don: That’s it. Data connects all of these facets of why these laws are being generated, why it’s very difficult to predict. And if you think about artificial intelligence, you referred to it. There’s really three things I believe you can do with artificial intelligence that you can augment your efficiencies.

Don: You can create efficiencies for an enterprise. You could reduce scope of your overall and allow for you to leverage automation, not just efficiencies, but automation in terms of roles. You might have role repurposing, but where you refer to is the third facet that we don’t know. The true unknown. The true unknown is what companies are creating brand new.

Don: The brand new could happen tomorrow. The brand new could happen in three years. But if you create a law now for something that’s going to happen in three years, you’ll have to change the legislation for it to keep up. Because if you stifle too early, you absolutely curtail growth.

Don: And that’s a problem at the country level. You can’t curtail growth, because stifled innovation becomes an issue, and then your organizations cannot continue to move into the future. And what will also happen, in my opinion, is that other countries will allow for that growth, so you’re in a competitive race. So how do you avoid stifling innovation while still controlling the ways data can actually be used for harm?

Don: Fascinating opportunity. We could probably go on this for about two hours, I would guess.

Kalinda: We could, because, you know, Don, it’s just a perfect example of looking at the way the U.S. developed around the Internet, which was to give a lot of freedom to companies to explore and experiment, while Europe took an approach of much more restriction, in many ways to protect privacy and other human rights.

Kalinda: And see, this goes back to my original thesis: who’s going to be in control here, government or business, and which way works best? I think it’s still to be determined between Europe and the U.S., but it is funny to think about the tension we have seen over the past 20-plus years of development. That’s where I’m very curious to see where we head, especially with AI and security, because with both of those issues, the more restrictions or requirements you put around them,

Kalinda: Maybe the better off the public is, but also it takes away from companies abilities to do certain things, to explore certain areas. And quite frankly, with cybersecurity, the amount of back-end investment it takes to sometimes meet those standards isn’t really reasonable for every size company.

Don: No, it’s not. And quite frankly, if you look at who gets to play by the rules and who doesn’t, it’s unfortunate, but criminals don’t have to play by any rules. So as much infrastructure as you sink in, even if you’re a small institution, you may not be able to curtail the inbound attacks that you’re going to receive from a cyber threat.

Don: And it becomes, it becomes very much way too expensive for these organizations to operate. So it’s a difficult scenario that we find ourselves in for sure.

Don: Anything else that you see in terms of the evolution of this legislation? As you look into your crystal ball five years from now, what do you see in terms of the merging, or potential merging, of data privacy, cyber, and artificial intelligence legislation?

Don: Do you see that ever happening?

Kalinda: I think, Don, one of the things that is really challenging between Europe and the U.S. has been the question of how you separate out security issues, AI issues, and privacy issues when they all tie back to data. That is especially true in the way we look at the regulations that have developed in Europe. In Europe, they’ve begun to recognize that in some way, and that’s why you have this through line of commonalities: even though each of those laws addresses different risks, whether competition, safety, or the ethical aspects of AI, they all share a through line in terms of data usage, similar to what’s set forth in the GDPR.

Kalinda: And I think the challenge we’ll have here in the U.S. is getting that level of clarity, because I will say for companies, one benefit of operating in Europe is you have clarity. You have a little bit more of a sense of how things interact with each other and you can set certain standards. Now, the problem with that clarity is that it often restricts a lot of what you would like to do, and I think the fines we’re seeing coming out of Europe make it significant enough for companies of all sizes to stop and pause before they do things.

Kalinda: And again, getting back to our earlier conversation on, you know, does government help or stifle innovation? That is going to be very interesting to watch in the AI space in Europe because the fact that it is the first region of the world to come up with comprehensive AI legislation that companies will need to follow with significant implications if they don’t, may make it very challenging for a lot of companies to explore in Europe things that they might be able to explore in other parts of the world.

Kalinda: And so it remains to be seen whether that’s to Europe’s economic benefit or not. Here in the U.S., we’ve seen that the free-market approach to how we use data has very much benefited business, but maybe not the individuals who’ve shared their data. That’s the struggle we now have to figure out here in the U.S., because we’ve seen over the past 20-plus years what happens when organizations can collect data without many restrictions: they use that data to develop new ways of interacting with each other, new ways of purchasing items, new ways of doing just about everything in our lives, which is what AI has the potential to do as well.

Kalinda: Yet the cost to human beings in many ways has been quite high. One particular thing that matters to me is how the next generation is being impacted by the technology we have today. I think we see that a lot in society, and in some of the difficulties we’ve seen over the past few years here in the U.S., what can happen when tech has the freedom to use data in ways that it may not be able to in other parts of the world.

Don: Wonderful responses. Thank you. And I want to ask our final question to you. As I asked all my guests, as you look forward, Kalinda, what’s on your radar?

Kalinda: Well, on my radar in particular is trying to manage my 16-year-old, 13-year-old, and 9-year-old’s expectations around access to data and tech. Because we have a strong rule in our house: you don’t get a phone until you’re 14. And my 13-year-old is pushing up against that and asking why it can’t be sooner.

Kalinda: And my daughter, who has turned overnight into a massive Swiftie, is finding ways to get online that I hadn’t expected. So, I think for all parents out there in this day and age, as I was mentioning, the challenge for all of us is raising an entire generation that has no concept of what it is to wait for something, to have to go to a store to buy it, or to go to the store and find they don’t have it.

Kalinda: Entire generation that is used to being able to see instantly what their friends are doing on the weekends rather than waiting until school on Monday to find out. There are just so many aspects of their lives that are more complicated, more interesting as well, and more beneficial because of technology. You know, my kids have tutors that are in, you know, all parts of the world, helping them with different issues.

Kalinda: That’s an awesome thing. At the same time, though, you know, they get a lot of peer pressure from, you know, friends of, have you played this, you know, new game online? Why don’t you have a cell phone? Why don’t you have this? So, it creates a lot of challenges for us as parents trying to navigate this new social world for kids.

Kalinda: So, that’s a big part of what’s on my mind, Don.

Don: I absolutely appreciate that entirely. Kalinda, thank you so much for joining our podcast today. And thank you to everyone listening to the On Your Radar podcast. On Your Radar is made possible by RadarFirst. RadarFirst automates intelligent decisions for your privacy and compliance regulatory obligations.

Don: You can learn more at RadarFirst.com. And if you appreciated what you heard today, please subscribe to our podcast and stay tuned for our next episode coming next month. Thank you.
