The Code: Cybersecurity Issues Surrounding Automated and Autonomous Vehicle Technology

INTRODUCTION

Two decades ago, cars were practically synonymous with Michigan's Big Three automakers. Now the industry has expanded across continents. American automakers are no longer concentrated in Detroit, and California start-ups are looking to get a piece of the action. Automotive technology has also drastically changed. Key-turn ignition systems have largely given way to push-button start: the driver merely presses his foot on the brake and pushes a button. Drivers no longer need to worry about blind spots when switching lanes, and automatic braking has reduced the likelihood of a frazzled moment turning into a tragic collision.

The automated features implemented on newer models of cars have significantly assisted in promoting safe driving. However, these features do not yet amount to the highly anticipated fully autonomous fleet. Compared to fully autonomous vehicles, automated vehicles have only partial automation features; the driver must therefore remain engaged while driving and monitor his environment. A fully autonomous vehicle, on the other hand, is capable of performing all of the driving tasks, thereby eliminating the driver's responsibilities.[1] Both kinds of vehicles are highly appealing because they reduce the likelihood of human error and human-induced hazardous driving, thus creating a safer and more uniform driving environment. However, both automated and fully autonomous vehicles may still create system vulnerabilities that were not a serious concern before such features were implemented for public use. These vulnerabilities include an increased risk of security breaches and system misconfigurations.

Nevertheless, the goal of ultimately producing fully autonomous vehicles has certainly led to the rapid evolution of automotive technology, along with the preparation of the public for the possibility of eventually sharing the road with such cars. Fully autonomous cars are no longer limited to Palo Alto or the hilly streets of San Francisco. Both the University of Michigan and Michigan State University have introduced driverless shuttle services on their respective campuses.[2] Tesla has adopted the name Autopilot to describe the self-driving capability of its cars, even though the company's cars are not actually driverless.[3]

Just imagine the possibilities a driver could experience with the arrival of autonomous vehicles. The assistance drivers will get by "driving" cars equipped with autonomous vehicle technology is so appealing that driving will become a pleasure rather than a burden. The advantages are indeed highly appealing, but unfortunately, there are concerns over technological vulnerabilities that may arise as autonomous vehicles become more prominent. Eventually, vehicles will become fully autonomous. These cars will use built-in sensors that detect and monitor other vehicles and receive input regarding their surroundings.[4] Moreover, autonomous vehicles will also share data with each other and "learn" from each other. Nevertheless, system outages caused by misconfiguration, ransomware attacks, and other threats are important risks that need to be addressed. Otherwise, the technology will do the public more harm than good. Therefore, this essay will address the vulnerabilities arising from potential ransomware attacks and the susceptibility to misconfiguration.

I. A PREVIEW OF REVOLUTIONARY AUTOMOTIVE TECHNOLOGY

The concept of self-driving cars did not begin with Elon Musk's vision for Tesla Motors. In fact, the plan to manufacture autonomous vehicles started in the 1920s, with a goal of making the cars available to the public during the 1950s.[5] The earliest depiction of autonomous vehicles took place at the 1939 World's Fair in General Motors' sponsored exhibit, Futurama. This futuristic exhibit showed radio-controlled electric vehicles being propelled by the electromagnetic fields generated from circuits embedded within the road. Futurama showed the public the first prototype for a smart city. General Motors kept its promise and rolled out its Firebird concept cars throughout the 1950s and 1960s, declaring them an "electronic guide system [that] can rush . . . over a [smart] highway while the driver relaxes."[6] Unfortunately, this vision remained a concept until the twenty-first century, when General Motors began introducing the "industry's first tru[ly] hands-free driver-assistance technology [, thus] allowing drivers to travel hands-free on over 200,000 miles of compatible roads across the United States and Canada."[7] However, General Motors' Super Cruise technology is automated rather than fully autonomous, which means that a family road trip will still require a family member to do the driving rather than playing a game, like in Figure 1.

Figure 1. A highly popular depiction of autonomous vehicles from 1956 showing how with driverless cars, a family can turn the drive into fun and games.[8]

According to the Society of Automotive Engineers (SAE), there are six levels of driving automation for cars, ranging from Level 0 to Level 5, with Level 5 featuring cars that have full autonomy. Through full automation, a Level 5 vehicle is capable of performing all of the tasks related to driving under any condition, and the driver is not required to control the vehicle.[9] However, vehicles that fall into this category are still in an extensive research and development phase.[10] As can be seen in Figure 2, automation Levels 0, 1, and 2 require driver interaction, whereas Level 3, Level 4, and Level 5 vehicles are considered highly automated vehicles (HAVs) by the National Highway Traffic Safety Administration (NHTSA).

Figure 2. An illustration of the six levels of driving automation released by the SAE.[11]

Naturally, the technology incorporated into the vehicles at each level of automation varies from level to level. A car that falls under Level 0 may feature traditional cruise control and warn the driver of an impending collision without intervening on the driver's behalf, but it lacks any technology pertaining to automation.[12] Cars under Level 0 fully rely on the driver to dictate every action throughout the drive. More modern passenger vehicles, like the Kia Stinger GT, are classified as Level 1. Under Level 1, the mobility of the car is still supervised by the driver, but there is at least one advanced driver-assistance feature, like adaptive cruise control or lane-keeping assist technology. At Level 2, we are faced with more sophisticated driver-assist features. Automotive manufacturers like Tesla and General Motors have referred to their sophisticated driver-assist features as hands-free. However, these cars are not fully autonomous and still require the driver to actively monitor his surroundings and be ready to intervene at any time.

The sophistication of the technology featured in Level 2 vehicles has acted as a stepping stone to the technology in HAVs. Key technical features include data collection through sensors and the robust software that operates the system and processes the collected data. For example, Tesla's Autopilot sensor coverage includes eight cameras that provide 360 degrees of visibility around the car with a maximum distance range of 250 meters.[13] General Motors' Super Cruise technology operates in a similar manner, where "real-time precise positioning cameras," in conjunction with sensors and light detection and ranging (LiDAR) systems, map the data collected from the road to assist in driving.[14] Although HAVs are not yet publicly available, they will require similar sensing and processing capabilities.

For a fully autonomous vehicle to drive itself from one location to another without any driver engagement, the car must make all major driving decisions and carry out all of the basic driving operations, like steering, accelerating, braking, abiding by the speed limit, and avoiding obstacles.[15] Figure 3 presents a basic mock-up of the sensor layout found on a prototype of a typical autonomous vehicle. As explained above, current models with automated features use similar sensor and camera technologies. However, these components still depend on human control when needed.[16]

Figure 3. Basic view of the sensor technology found in Level 2 vehicles with sophisticated driver-assist features and HAVs.[17]

Figure 3 shows that autonomous vehicles will require camera-based systems, LiDAR systems, radar sensors, and a plethora of other sensor and wireless communication related technologies in order to achieve self-driving functionality.

Camera-based systems are inexpensive and act as the first line of sensors in a vehicle's sensor system.[18] They are analogous to the eyes of a human driver. The images collected by the vehicle's cameras can be used by software based on sophisticated mathematical algorithms to run a comparison analysis against the images found in a "pre-built" database.[19] The information collected from the camera-based systems can be used to determine the location and speed of surrounding objects. Further, by embedding multiple cameras around the vehicle, the car's sensing system becomes capable of "receiv[ing] parallel images of the same object from different angles," allowing the vehicle's software to effectively estimate the distance of the object.[20]
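The parallel-image distance estimate described above amounts to simple triangulation: the same object appears shifted between two cameras, and the size of that shift (the disparity) determines its distance. The Python sketch below illustrates the arithmetic only; the focal length, camera spacing, and disparity values are hypothetical and are not drawn from any production system.

```python
def stereo_depth(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Estimate the distance to an object seen by two parallel cameras.

    The farther the object, the smaller the pixel shift (disparity)
    between the left and right images: depth = focal * baseline / disparity.
    """
    if disparity_px <= 0:
        raise ValueError("object must appear in both images with a positive disparity")
    return focal_length_px * baseline_m / disparity_px

# Hypothetical numbers: 700-pixel focal length, cameras 30 cm apart,
# object shifted 10 pixels between the two images -> 21 m away.
print(stereo_depth(700.0, 0.3, 10.0))  # 21.0
```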

Like the cameras, LiDAR systems can also track moving objects; indeed, LiDAR technology has "been touted as the most important piece of hardware for self-driving cars."[21] LiDAR is currently used in certain Level 2 vehicles, and research and development teams have incorporated the technology into HAVs as a means to detect the distance from the vehicle to surrounding objects. The distance is determined by the LiDAR system "emitting millions of laser beams every second and measuring how long it takes the laser beams to reflect off [of an] object[]."[22] This feature is what makes LiDAR extremely accurate.
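The time-of-flight principle quoted above reduces to one line of arithmetic: the beam travels out and back at the speed of light, so the one-way distance is half the round trip. A minimal sketch, using a made-up round-trip time:

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0  # meters per second

def lidar_range_m(round_trip_s: float) -> float:
    """Distance to a reflecting object from a laser pulse's round-trip time.

    The pulse travels to the object and back, so divide the total path
    (speed * time) by two to obtain the one-way distance.
    """
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2.0

# A pulse returning after about 66.7 nanoseconds hit something ~10 m away.
print(lidar_range_m(66.7e-9))
```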

Sophisticated versions of LiDAR are composed of multiple laser range finders that use rapidly rotating mirrors to generate three-dimensional point clouds of the vehicle's surrounding environment. The point clouds are then used to generate a three-dimensional map of the surrounding area, which is compared to a reference three-dimensional map created by human test drivers. From the point cloud data and the three-dimensional map, the vehicle's software algorithms can identify surrounding objects like cyclists, pedestrians, obstructed paths, and other vehicles while ignoring unimportant objects like flying birds.
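One way to picture the reference-map comparison is as a nearest-neighbor test: scan points that lie close to the pre-built static map are background (buildings, road edges), while points with no nearby match are candidate moving objects. The toy sketch below assumes two-dimensional points and an arbitrary distance threshold; real systems register millions of three-dimensional points with far more sophisticated algorithms.

```python
import math

def flag_new_objects(scan, reference_map, threshold_m=0.5):
    """Return scan points with no reference-map point within threshold_m.

    Matched points are treated as static geometry and ignored; unmatched
    points may be cyclists, pedestrians, or other vehicles.
    """
    def near_map(point):
        return any(math.dist(point, ref) <= threshold_m for ref in reference_map)
    return [point for point in scan if not near_map(point)]

reference = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]  # known static surroundings
scan = [(0.1, 0.0), (2.0, 0.1), (5.0, 5.0)]       # current LiDAR frame
print(flag_new_objects(scan, reference))          # [(5.0, 5.0)]
```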

Figure 3 also shows the use of radar. The radar sensor emits radio waves to determine the presence of any surrounding objects.[23] The system measures the time it takes for the transmitted radio waves to reflect off the surface of an object in order to determine the object's position. Like LiDAR, radar sensors determine the positions of surrounding objects, but radar is not as accurate as LiDAR.

Further, V2X technologies will have a significant role in autonomous vehicles. These technologies have been of high interest even to the federal government: over the past couple of decades, most of the autonomous vehicle research funded by federal agencies has focused on the development of V2X technologies.[24] V2X is an umbrella term that has been increasingly used to refer to both vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) capabilities. Through the implementation of V2X technologies, a wireless infrastructure is capable of connecting cars to one another and to transmitters installed along the road, thus creating another method of data exchange.

V2X technologies are essentially synonymous with the concept of telematics. Through telematics, data exchanges to and from a moving vehicle become possible. Dedicated short-range communication (DSRC) systems and wireless technologies, like Bluetooth and wireless fidelity (Wi-Fi), will aid in making the concept of V2X possible.[25] Naturally, this will fuel concerns regarding the risk of potential security breaches. However, such communication is closely regulated by the Federal Communications Commission (FCC), and permission to use the necessary radio frequencies is granted only to licensed users; therefore, there are likely no imminent risks at present.

The technology of vehicles with sophisticated driver-assist features—which are typically classified as Level 2 vehicles—and HAVs could be an article in itself. The technology previewed in this section merely provides examples of how software-dependent these vehicles are and how vulnerable they can become in the face of ransomware attacks and misconfiguration. Once Level 5 is achieved, we will certainly be able to experience an enjoyable time on the road, like the family in Figure 1. However, to safely reach this level of comfort and leisure during a commute, the data collected by vehicles, the sensors, and the computer systems tasked with operating these features need to be protected from cyberattacks and from misconfiguration by the software developer. Therefore, it is important to address these risks well before the roll-out of HAVs.[26]

II. THE THREAT IMPOSED BY RANSOMWARE AND MISCONFIGURATION ON AUTOMATED AND AUTONOMOUS VEHICLES

Imagine a car pulls up to your pick-up location, and the only thing you must do is open the door, settle in, and prompt the car with your destination. The car takes it from there, and you can relax during your commute. It sounds like an urban legend, or merely an unrealized twentieth-century vision. But in 2014, the HBO show Silicon Valley made it a reality on screen.[27] Moreover, the scene also showed how easily a car's system could be intruded upon.

In a season one episode, the character Jared is offered a ride home in the car of the eccentric billionaire venture capitalist Peter Gregory. The car turns out to be driverless and operates without issue after Jared verbally prompts the car with his drop-off location. However, halfway through the trip, the car's computer notifies Jared of a "destination override" and states that the destination has changed from a residential address in Palo Alto to Gregory's man-made island in the middle of the Pacific Ocean, Arallon. There is no way for the passenger to correct the system, or even to take control of the car, so Jared accepts his fate and arrives at Arallon as its only human inhabitant. Silicon Valley is a satire on the region known for rapidly proliferating technology, so the scene with Jared and the driverless car certainly has some comedic effect. On a more serious note, however, the scene shows that HAVs can be hacked, which is something that could also happen to Level 2 cars with sophisticated driver-assist features.

We live in a time when we are incredibly dependent on our connected devices. Our phones function as digital wallets, password managers, and a multitude of other things. Basically, we carry our lives in a palm-sized computer. Our phones also act as our window to the outside world through the power of Wi-Fi and cellular data. If there were a blackout or a system outage, we would be lost. Indeed, the October 2021 Facebook outage tested the strength of our dependency. If something as non-life-threatening as the Facebook outage led to loss of income and loss of communication around the world, then what are the implications the public will face if our cars are susceptible to hacking or suffer a system outage attributed to misconfiguration?

Newer models of cars are becoming connected—by introducing more automated features—giving the impression that driver responsibilities are minimized. Eventually, these cars are going to evolve, and the road will become occupied solely by fully autonomous vehicles. However, system outages caused by ransomware attacks and misconfiguration could lead to life-threatening outcomes. Therefore, we need to understand how automated and autonomous vehicles will be vulnerable to breaches and how government regulations and the American legal system can catch up.

A. Scenario One: Misconfiguration and Sharing the Road

On the road, we see cars ranging from the nineties to the newest model year. Occasionally, we might see a car or two from more than thirty years ago. The capability of each car is different, but we share the road in the most courteous way possible. Now, with the prevalence of cars equipped with automated features, like Tesla's Autopilot or General Motors' Super Cruise, additional variables have been added as we try to safely share the road.

Both Autopilot and Super Cruise use sensors that are controlled through internal software algorithms. The system's software rapidly collects and processes the data picked up by camera-based systems, LiDAR, and other sensors embedded on the vehicle. If the car is running on outdated software, or the system is not keeping up with security patches, then it is possible for the car's automated features to suffer an outage. Furthermore, such misconfigurations can create vulnerabilities that can be exploited by threat actors.

Misconfiguration is a security vulnerability that can arise internally and be remedied internally. Vehicle manufacturers need to ensure that their drivers are not running outdated versions of the system's software, either by continuously reminding the driver or through automatic updates. Further, manufacturers need to ensure that the system's security patches are maintained and functioning sufficiently, which can be achieved through regular testing of the system used by their vehicles.
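The update hygiene described above implies, at minimum, a routine comparison of each vehicle's installed software against the latest release. The sketch below is purely illustrative; the version strings, fleet data, and update policy are hypothetical and do not describe any manufacturer's actual system.

```python
def needs_update(installed: str, latest: str) -> bool:
    """Compare dotted version strings, e.g. '2021.36.5' vs. '2021.44.2'.

    A vehicle running anything older than the latest release should be
    flagged for an automatic update or a persistent driver reminder.
    """
    as_tuple = lambda version: tuple(int(part) for part in version.split("."))
    return as_tuple(installed) < as_tuple(latest)

# Hypothetical fleet check: one car is behind on security patches.
fleet = {"VIN0001": "2021.36.5", "VIN0002": "2021.44.2"}
stale = [vin for vin, version in fleet.items() if needs_update(version, "2021.44.2")]
print(stale)  # ['VIN0001']
```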

At Level 2, a system outage caused by misconfiguration likely will not prevent the driver from using his car, since the car can still function without the sophisticated driver-assist technology that earns its Level 2 classification. Therefore, misconfiguration may not have a hazardous impact on other cars sharing the road. However, in the future, when HAVs join the road, there will likely be repercussions.

B. Scenario Two: Targeting the Vehicle Manufacturer

Although marketed as driverless vehicles, the current "driverless" cars are merely at Level 2 on the automation scale. A driver is permitted to take his hands off the wheel, but when alerted by the system, the driver must engage and take control of the car.[28] What if a car manufacturer that produces this kind of Level 2 vehicle is the target of a ransomware attack? This could create the potential for a life-threatening outcome. However, before we examine a potential outcome to this scenario, let us first look at a real, life-threatening ransomware event.

In Alabama, a hospital was the victim of a chaotic ransomware attack that resulted in an incorrect procedure for delivering a baby, who died nine months after birth.[29] Because the attackers had "wiped" access to all digital patient files, the staff recorded all patient data by hand, and the central monitoring system connected to each delivery room was no longer functioning. The inability to access the central monitoring system made it harder for staff to continuously monitor patients in real time. Had there been no attack, or had the hospital administration not refused to pay the ransom, the physician likely would have had enough information to know that the baby was in distress and needed to be delivered through a cesarean section rather than natural birth, which was life-threatening under the circumstances. The hospital denied any wrongdoing and "released" itself from any liability by placing responsibility on the medical staff. However, if the wrongful death lawsuit brought by the baby's mother succeeds in court, the death will be "the first confirmed death from a ransomware attack."[30]

Prior to the COVID-19 pandemic, hospitals were frequent targets of ransomware attacks, with attackers "betting that [hospital] executives [would] pay quickly to restore [their facilities] lifesaving technology."[31] Now, with more "driverless" features sprouting on newer cars, it is likely that the manufacturers of these vehicles will catch the eye of ransomware attackers. Returning to our second scenario, what will happen when automated cars are taken "hostage" during a ransomware attack? If there is harm, does the responsibility shift away from the manufacturer?

Drivers may not be aware that the manufacturer of their cars has been the target of a ransomware attack and that they are driving in the midst of such an attack. If the driver has activated the car's driverless feature, like Autopilot or Super Cruise, then it is possible that the sensors will not "see" the road conditions and appropriately prompt the driver to engage. Such a failure can result in a tragic collision.

At Level 2, collisions could be mitigated by the driver taking control of the steering wheel or braking, since such an action can override the "driverless" feature of a Level 2 car.[32] Moreover, the ability to override the car's automated features could potentially prevent the ransomware attack from turning deadly. However, the driver must recognize that by driving a Level 2 car, he still has a duty to monitor his surroundings.

Further, if an accident does result from the attack, will the manufacturer be absolved from any wrongdoing, as the Alabama hospital claimed to be? Not really. Even if the company decides not to pay the ransom, it has a responsibility to regularly test and maintain its systems, and a failure to do so would constitute a breach of that responsibility.

C. Scenario Three: The Jared Experience

Scenario one and scenario two illustrated how misconfiguration and ransomware attacks can threaten vehicles classified under Level 2. However, each scenario also showed how the requirement that the driver monitor his surroundings can potentially prevent significant harm resulting from these security vulnerabilities. But what about Jared?

Jared’s experience with a Level 5, fully autonomous vehicle was introduced earlier. We learned that what started out as a routine trip home ended with Jared trying to adjust to his surroundings in the middle of the Pacific Ocean. Naturally, the writers of Silicon Valley were going for comedic effect, but the scene accurately represents potential intrusions into a fully autonomous car.

At Level 5, the driver is no longer required for any driving; the car operates under every possible condition and performs all of the necessary driving tasks. Therefore, we must ensure that these vehicles will not be susceptible to any form of security vulnerability. The Jared example was likely more of a system intrusion related to misconfiguration than a ransomware attack, which shows why system patching is important. There was likely a software flaw that resulted in the override of Jared's directions to his house. Clearly, the system was unable to recognize that the car was already en route when it "inputted" the new route to Arallon, which was not prompted by the passenger.

However, the Jared scene from Silicon Valley could easily be applied to a ransomware hypothetical. The sensors most vulnerable to a cybersecurity attack will likely be the camera-based sensors and the LiDAR. The radar sensors and V2X technologies may also be susceptible, but since their functions are regulated by the government, they may be slightly more difficult to infiltrate at present.

A ransomware attack on a Level 5 driverless vehicle, like Jared's, may disable the camera-based sensors through a blinding attack. A blinding attack is particularly critical because it aims to blind the camera, either fully or partially, by projecting light into the camera's view.[33] The camera-based sensors are like the eyes of a human driver, so any obstruction of the view can be highly dangerous and may result in an accident. Moreover, a Level 5 vehicle may also be susceptible to LiDAR attacks, like spoofing. LiDAR spoofing creates the possibility that autonomous vehicles can be "fooled" into seeing nonexistent obstacles and responding accordingly.[34] LiDAR computes the distance between the car and its surroundings by emitting laser beams and measuring the time it takes for the beams to bounce off an object and return to the sensor. If an obstacle is sensed, then the vehicle will appropriately avoid it either by braking or by changing lanes. However, if the LiDAR system falls victim to a spoofing attack and falsely sees an obstacle that prompts sudden braking, the attack could lead to a multi-car collision on a congested road.
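The spoofing hazard can be made concrete with a toy mitigation: because a fabricated LiDAR echo will often disagree with what other sensors report, one hedge is to require a second sensor to corroborate an obstacle before triggering an emergency brake. The sketch below is a simplified illustration of that idea, not a countermeasure actually deployed by any manufacturer, and the distances and thresholds are invented.

```python
def emergency_brake(lidar_range_m: float, radar_range_m: float,
                    tolerance_m: float = 2.0, danger_m: float = 5.0) -> bool:
    """Brake hard only if LiDAR and radar roughly agree that an obstacle
    is dangerously close; a LiDAR-only "ghost" obstacle is ignored.
    """
    sensors_agree = abs(lidar_range_m - radar_range_m) <= tolerance_m
    return sensors_agree and lidar_range_m <= danger_m

# Spoofed LiDAR reports a phantom car 3 m ahead; radar sees open road at 40 m.
print(emergency_brake(3.0, 40.0))  # False: no sudden braking on a ghost
# Both sensors agree on a real obstacle about 4 m ahead.
print(emergency_brake(4.0, 4.5))   # True
```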

During a ransomware attack, the vehicle manufacturer will be unable to "fight" the attack by doing a system override, since its computers will be "wiped" by the hackers until the ransom is met. Therefore, manufacturers need to understand the risks posed to a driverless vehicle before "deploying" it to the market. Level 5 vehicles will eliminate the need for driving, so there will likely be no means for a passenger to intervene if the car is attacked or when there is a system malfunction. Thus, it is very important that manufacturers perform regular system tests and maintenance to reduce the potential for misconfiguration, as well as assess and fix any system vulnerabilities that may be prone to external attacks like ransomware. Any failure to conduct such tasks should not free manufacturers from wrongdoing during an attack.

CONCLUSION

Autonomous vehicle technology is rapidly proliferating, and it is unlikely that vehicle manufacturers will "pause" until the law catches up. The technology is new, but it has the potential to shape the legal field. With Level 2 vehicles, we are at a position where cybersecurity risks like misconfiguration and ransomware can be mitigated. However, as we approach Level 5, such risks may produce great harm. Therefore, policies need to be formed that hold vehicle manufacturers accountable for robustly running system tests with white hat hackers—who uncover security vulnerabilities—and maintaining necessary system updates before a black hat hacker—who illegally enters the network with malicious intent—intrudes into the system. This needs to be done ex ante rather than ex post.


A. Buke Hiziroglu (’22) is a Juris Doctor candidate at Michigan State University College of Law. She is an Articles Editor on the Michigan State Law Review and serves as the Co-Editor on the inaugural volume of the MSLR Forum. In 2014, Buke received her Bachelor of Science in electrical engineering, with a minor in chemistry, from Kettering University, and in 2016, she received her Master of Science in biomedical engineering, with a concentration in bioelectrics, from the University of Michigan. Her primary areas of legal interest are patent law, technology transactions, intellectual property infringement & counterfeiting, cybersecurity & data protection, and administrative law.

[1] See Automated Vehicles for Safety, Nat’l Highway Transp. Safety Admin., https://www.nhtsa.gov/technology-innovation/automated-vehicles-safety (last visited Nov. 30, 2021) (outlining the different levels of automation in automotive technology as created by the Society of Automotive Engineers (SAE International)) [hereinafter Automated Vehicles].

[2] See MCity Driverless Shuttle, MCity, https://mcity.umich.edu/shuttle/ (last visited Dec. 4, 2021); MSU Autonomous Bus, MSU Mobility, https://mobility.msu.edu/events/MSU%20Autonomous%20Bus.html (last visited Dec. 4, 2021).

[3] See Marielle Segarra & Stephanie Hughes, Tesla’s Full Self-Driving Mode is Actually Not Fully Self-Driving, Marketplace: Marketplace Tech (Sept. 30, 2021), https://www.marketplace.org/shows/marketplace-tech/teslas-full-self-driving-mode-is-actually-not-fully-self-driving/.

[4] See Alfred R. Cowger, Jr., Liability Considerations When Autonomous Vehicles Choose the Accident Victim, 19 J. of High Tech. L. 1, 2 (2018). Input data that an autonomous vehicle may receive from its surroundings include, but are not limited to traffic flow and volume, road conditions, and weather conditions. See id.

[5] See Ayse Buke Hiziroglu, Autonomous Vehicles and the Law: How Each Field is Shaping the Other 1 (Amir Khajepour ed., 2020).

[6] See id. at 5.

[7] GM Introduces New Super Cruise Features to 6 Model Year 2022 Vehicles, General Motors: Corp. Newsroom (July 23, 2021), https://media.gm.com/media/us/en/gm/news.detail.html/content/Pages/news/us/en/2021/jul/0723-gm-supercruise.html.

[8] See Jenn U, The Road to Driverless Cars: 1925–2025, Eng’g (July 15, 2016), https://www.engineering.com/story/the-road-to-driverless-cars-1925---2025.

[9] See Automated Vehicles, supra note 1.

[10] See Hiziroglu, supra note 5, at 8–9.

[11] See SAE Levels of Driving Automation™ Refined for Clarity and International Audience, SAE Blog (May 3, 2021), https://www.sae.org/blog/sae-j3016-update.

[12] See Kyle Hyatt & Chris Paukert, Self-Driving Cars: A Level-By-Level Explainer of Autonomous Vehicles, Roadshow (Mar. 29, 2018, 1:13 PM), https://www.cnet.com/roadshow/news/self-driving-car-guide-autonomous-explanation/.

[13] See Future of Driving, Tesla, https://www.tesla.com/autopilot (last visited Dec. 4, 2021).

[14] See, e.g., Super Cruise™ Available Driver Assistance Technology, Chevrolet, https://www.chevrolet.com/electric/super-cruise (last visited Dec. 4, 2021) [hereinafter Super Cruise™].

[15] See Hannah YeeFen Lim, Autonomous Vehicles and the Law: Technology, Algorithms and Ethics 5 (2018).

[16] Automated vehicles currently available on the market are considered Level 2 vehicles, which require the driver to monitor the driving environment and take control of the car when needed. See supra Figure 2; Automated Vehicles, supra note 1. Even manufacturers of these automated vehicles state the existence of a driver’s responsibility on their products’ websites. See, e.g., Super Cruise™, supra note 14 (emphases added) (“Super Cruise does not perform all aspects of driving nor do everything a driver can do. Super Cruise allows the driver to drive hands-free when compatible road driving conditions allow the feature to be available; but the driver still needs to pay close attention to the road. Even while using the Super Cruise drive assistance technology, drivers should always pay attention while driving and not use a hand-held device.”).

[17] See Autonomous Vehicles – Top 10 Global Auto Makers and Their Design and Technology Innovation, Auto Tech Rev. (Jan. 2, 2020), http://autotechreview.com/siemens-automotive-engineering-simulation-and-automation-test-center/technology-and-innovation/autonomous-cars-trucks-and-vehicles-technology-and-innovation-top-10-global-auto-makers-and-future-of-mobility-with-artificial-intelligence.

[18] See Lim, supra note 15, at 7–12.

[19] See id.

[20] See id.

[21] See id. at 9.

[22] See id.

[23] See id. at 11.

[24] See Hiziroglu, supra note 5, at 7–8.

[25] See id. at 35–36.

[26] Optimistic auto manufacturers are projecting 2025 as the year that the first-ever genuinely driverless car will hit showrooms. See Rick Newman, When Self-Driving Cars Are Coming, For Real, Yahoo News (July 21, 2021), https://news.yahoo.com/when-self-driving-cars-are-coming-for-real-164631730.html.

[27] See Silicon Valley: Third Party Insourcing (HBO May 11, 2014).

[28] See Autopilot and Full Self-Driving Capability, Tesla, https://www.tesla.com/support/autopilot (last visited Dec. 10, 2021) [hereinafter Autopilot]. 

[29] See Kevin Poulsen, Robert McMillan & Melanie Evans, A Hospital Hit by Hackers, a Baby in Distress: The Case of the First Alleged Ransomware Death, Wall St. J. (Sept. 20, 2021, 9:36 AM), https://www.wsj.com/articles/ransomware-hackers-hospital-first-alleged-death-11633008116 (discussing how medical staff were unable to provide appropriate, life-saving medical care to a baby in distress because the monitor readouts were incorrect as a result of a ransomware attack in progress).

[30] Id.

[31] Id.

[32] See Autopilot, supra note 28 (explaining how the driver can override Tesla’s Autopilot system by simply taking control over the car).

[33] See Autonomous Vehicles Camera Blinding Attack Detection Using Sequence Modeling and Predictive Analytics, SAE Int’l (Apr. 14, 2020), https://www.sae.org/publications/technical-papers/content/2020-01-0719/.

[34] See Yulong Cao & Z. Morley Mao, Autonomous Vehicles Can be Fooled to ‘See’ Nonexistent Obstacles, GCN (Mar. 6, 2020), https://gcn.com/articles/2020/03/06/lidar-spoofs-autonomous-vehicle-hack.aspx.


Any reproduction of the Article, including, but not limited to its publication, posting, or excerption in print, or on the internet, shall give attribution to the Article’s original publication on the online MSLR Forum, using the following method of citation:

“Originally published on Apr. 6, 2022 Mich. St. L. Rev.: MSLR Forum.”

