## “It is really all about the incentives”
The government must handle data with caution to ensure that citizens are not harmed by a lack of proper oversight. Many government programs are overwhelmed with applicants and are seeking to modernize by applying more advanced technology at various steps of their processes. Although the desire to increase organizational efficiency is laudable, using algorithms to make decisions carries risks. As the government becomes more interconnected, there is a growing risk that AI will be treated as the final authority and that people will lose control of the processes they created, increasing the chance of bad decisions that hurt the very people the government was created to serve.
## Rise of Artificial Intelligence
Eli Whitney’s invention of interchangeable parts fundamentally changed manufacturing, and the industrial revolution increased the speed and scale at which material goods were produced. Artificial Intelligence (AI) allows technologists to bring about a similar revolution in the digital environment, one that will permanently increase the efficiency of digital processes and reshape the current workforce.
People can now map out the activities they perform on a recurring basis and teach AI to take those activities over, freeing them for higher-level work. We do not consciously think about breathing, balancing, or other low-level activities because our bodies perform them without deliberate choice. Technology should work for people in the same way, absorbing mundane tasks so that we can focus on the people around us rather than on a computer terminal.
Artificial intelligence is being used to make decisions at an increasing rate because of the need to scale business processes so that companies can grow while controlling operational costs. For those decisions to be made, large volumes of data are needed to create the models that AI utilizes. These data sets are often referred to as Big Data, which the National Institute of Standards and Technology (NIST) characterizes by increasing size and complexity across both structured and unstructured data. In its Big Data Interoperability Framework, NIST stated that “Big Data refers to the need to parallelize the data handling in data-intensive applications” (NIST, 2018).
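NIST’s point about parallelizing data handling can be illustrated with a minimal sketch. The data set and chunking scheme below are invented for illustration; the pattern is simply splitting a large collection into chunks that independent workers can process in parallel.

```python
from multiprocessing import Pool

def count_errors(chunk):
    """Count the records in one chunk that are flagged as errors."""
    return sum(1 for record in chunk if record.get("status") == "error")

if __name__ == "__main__":
    # Toy stand-in for a large structured data set: every tenth record
    # is an error. Split it into chunks so each worker handles one share.
    records = [{"status": "error"} if i % 10 == 0 else {"status": "ok"}
               for i in range(1_000_000)]
    chunks = [records[i:i + 250_000] for i in range(0, len(records), 250_000)]

    # Fan the chunks out to a pool of workers and combine their counts.
    with Pool(processes=4) as pool:
        total_errors = sum(pool.map(count_errors, chunks))
    print(total_errors)  # 100000
```

The same fan-out-and-combine shape underlies frameworks such as MapReduce and Spark, just applied to data too large for one machine.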
NIST also describes five attributes of Big Data, known as the five V’s: volume, variety, velocity, veracity, and value. These definitions can be used when evaluating the data that powers underlying models. Understanding the data is vital to ensuring that the decision-making methodology is sound and does not institutionalize bias or other historical problems in the data. In a recent example, Amazon had to end an AI hiring program that, trained on 10 years of historical data, determined that men were preferable candidates based on their gender (Dastin, 2018).
Programs such as Amazon’s have highlighted the need for transparency when algorithms are used to make decisions or screen information for humans. If the Department of Agriculture used AI to determine food-stamp eligibility, some bias would likely exist in whatever data was used for modeling. Without an independent audit of the methodology and transparency regarding the process, the model could harm people by making unjust decisions about a critical safety net.
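One simple check that an independent auditor could run on such a model’s output is the “four-fifths rule” from US employment-selection guidelines: compare approval rates across groups and flag the model when the lowest group’s rate falls below 80% of the highest. The sketch below is only an illustration of that audit; the group labels and decisions are invented.

```python
def four_fifths_check(decisions, threshold=0.8):
    """decisions: list of (group, approved) pairs from a model's output.
    Returns (ratio, flagged): ratio is the lowest group approval rate
    divided by the highest, and flagged is True when the ratio falls
    below the threshold (a possible sign of disparate impact)."""
    totals, approvals = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return ratio, ratio < threshold

# Invented example: group A approved at 90%, group B at only 50%.
decisions = ([("A", True)] * 9 + [("A", False)] * 1 +
             [("B", True)] * 5 + [("B", False)] * 5)
ratio, flagged = four_fifths_check(decisions)
print(round(ratio, 2), flagged)  # 0.56 True
```

A passing ratio does not prove a model is fair, but a failing one is a concrete, publishable signal that the methodology deserves scrutiny.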
This transparency is also critical for people to trust the government when decisions are masked by limited releases of information and excessive bureaucracy. The Internal Revenue Service (IRS) admitted in 2017 that it had unfairly targeted conservative groups seeking tax-exempt status. There was public outcry when it was discovered that the IRS had singled out organizations based on key words in their names, such as “Tea Party” and “Patriots” (Overby, 2017). Regardless of political beliefs, government agencies must remain disengaged from politics, in accordance with the Hatch Act and other legislation designed to prevent abuse of power (OSC, 2019).
AI has proven that in controlled environments with clear success criteria, algorithms can perform very well. Games such as chess and Go have allowed technologists to build specialized AI systems that have beaten the best human players. These successes are monumental, and they prove that under the right circumstances AI can be a revolutionary tool.
## Government Innovation
The US government faces the challenge of innovating on existing business processes without violating the laws that all agencies are required to follow. The complexity of the programs being administered makes the downstream impact of changes difficult to project. The government has broad goals, such as ensuring the security of the nation or providing benefits to veterans, which are admirable but for which success is often not clearly defined. This lack of a shared vision of success can lead organizations to use AI in a manner that weaponizes bad data and causes models to make bad decisions at alarming rates.
AI continues to demonstrate the potential to dramatically increase organizational efficiency, but it has also demonstrated the ability to magnify or institutionalize errors. Algorithms can also drive the wrong behavior in people by forcing them to change their habits to adapt to the logic the algorithm employs. One example was the Veterans Health Administration scandal of 2014, in which algorithms were used to measure performance. The models were flawed and encouraged leaders to manipulate numbers to ensure that performance goals were met. As a result, veterans waited inordinate lengths of time on secret waiting lists that were tracked outside the system and therefore excluded from the model. Over 40 veterans died while on the secret waiting list at the Phoenix VA Medical Center, and similar issues were reported at other locations throughout the US, according to an investigation led by Senator Tom Coburn, M.D. (Coburn, 2014).
The National Security Agency (NSA) has been scrutinized for its misuse of data through a broad interpretation of the Patriot Act. Edward Snowden exposed the US government’s electronic surveillance programs and questionable ethics regarding data collection. The NSA defended these decisions by arguing that they were made in the interest of national security. Unfortunately, that goal is so vague that it allows the agency to keep encroaching on civil liberties. The justification for these violations is often a counterfactual: officials state that terrorist attacks were prevented, yet it is impossible to prove that violating civil liberties was necessary to keep an attack from occurring. If the NSA focused all of its resources on defending US interests, it might thwart the same number of terrorist attacks without breaking any laws or carrying out ethically questionable programs.
The government can struggle to innovate effectively, but there are positive examples of using algorithms and AI to improve services. In one example, the Department of Education recognized that it was not staffed to handle the volume of Free Application for Federal Student Aid (FAFSA) applications submitted each year. To provide timely service, it contracted with a private company to build an AI that would automate processing in accordance with business rules provided by the department. This system has ensured that over 11 million applications are processed within an average of 3-5 days of receipt (Department of Education, 2018). Students who are unhappy with an initial decision may appeal, but the consistency and quality of decisions continue to show that this deployment is suitable for the automation required.
The Veterans Benefits Administration has also successfully deployed automation in a manner that improves services to veterans. To shorten the time it takes to add a dependent to a veteran’s award, the Rules-Based Processing System (RBPS) was developed to automatically process specific types of claims, provided that certain prerequisites are met (M21-1, 2018). The system allows veterans to receive benefits for their dependents immediately, while the VA can audit as needed to ensure that no corrections are required.
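The actual RBPS prerequisites are specified in the VA’s M21-1 manual; the sketch below only illustrates the rules-based pattern, with invented rule names and an invented claim format. The essential idea is that a claim is auto-processed only when every published rule passes, and is otherwise routed to a human.

```python
from datetime import date

# Invented prerequisite rules in the spirit of a rules-based processor:
# each rule pairs a human-readable name with a predicate over a claim dict.
RULES = [
    ("claim is for adding a dependent", lambda c: c.get("type") == "add_dependent"),
    ("dependent's birth date is on file", lambda c: c.get("dependent_dob") is not None),
    ("no pending audit flag",            lambda c: not c.get("audit_flag", False)),
]

def process(claim):
    """Auto-approve only when every prerequisite rule passes;
    otherwise route the claim to a human examiner, listing the
    failed rules so the decision is explainable."""
    failures = [name for name, rule in RULES if not rule(claim)]
    if failures:
        return "manual review", failures
    return "auto-approved", []

claim = {"type": "add_dependent", "dependent_dob": date(2015, 3, 2)}
print(process(claim))  # ('auto-approved', [])
```

Because every rule is named and published, a rejected claim comes with the exact list of unmet prerequisites, which is what makes this style of automation auditable.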
While the dangers of unfettered algorithms remain a concern, the Department of Education and the Department of Veterans Affairs have demonstrated that it is possible to implement them in a monitored environment with clear success criteria. These projects also follow transparent rules that are published by the agency, and the models are tested to ensure that they comply with those rules. This is a stark contrast to agencies such as the NSA and the IRS, which caused public outcry precisely because they did not reveal the methodology behind their programs.
## Pressure to Modernize
Government agencies are at high risk of cyber-attacks because of the volume of sensitive information they gather. The Office of Personnel Management (OPM) suffered a catastrophic breach in which highly personal information gathered during background checks for security clearances, along with fingerprint records for millions of people, was stolen (Fruhlinger, 2018). The source of the attack was never definitively proven, but several cyber-security experts believe China was to blame.
The US has experienced the highest volume of data breaches of any country in the world. In 2016, 47.5% of all incidents occurred within the US, and billions of records were exposed (Watson, 2017). While the government accounted for only 11.7% of those breaches, they are especially dangerous because cyber-attacks against the government could damage key infrastructure and cause lasting harm to the country.
As the government seeks to implement AI and improve its business processes, it should begin its modernization efforts by establishing sound data governance policies and comprehensive cyber-security plans. Data science offers huge opportunities for government through the ability to process larger and more complex data sets; the resulting insights can yield better recommendations for policymakers and services more tailored to citizens’ needs (Drew, 2016). The government has a responsibility to be a better steward of data and to ensure that it is used ethically to help people and improve the quality of the services the government provides.
As the government modernizes, it is important that humans are kept in the decision loop, questioning the decisions that AI programs recommend. People are essential decision makers because they have intuition and general intelligence that let them see aspects of problems that AI will miss. A human can review a complex mathematical result and sense that the answer doesn’t “feel right.” This perception allows humans to make logical leaps and infer information in a manner that AI cannot replicate. As Douglas Hofstadter wrote in discussing the pattern puzzles of the Russian researcher Mikhail Bongard, “perception is far more than the recognition of members of already-established categories — it involves the spontaneous manufacture of new categories at arbitrary levels of abstraction” (Hofstadter, 1995).
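A common way to keep humans in the loop is to let the model act only on its high-confidence cases and route everything else to a person. The sketch below is one minimal version of that triage pattern; the threshold, case identifiers, and record format are invented for illustration.

```python
def triage(predictions, threshold=0.95):
    """Split model outputs into auto-decisions and a human review queue.
    Each prediction is a (case_id, label, confidence) tuple; anything
    below the confidence threshold is deferred to a human reviewer."""
    auto, review = [], []
    for case_id, label, confidence in predictions:
        if confidence >= threshold:
            auto.append((case_id, label))
        else:
            review.append((case_id, label))
    return auto, review

# Invented model outputs: only the uncertain middle case needs a human.
preds = [("c1", "approve", 0.99), ("c2", "deny", 0.70), ("c3", "approve", 0.96)]
auto, review = triage(preds)
print(auto)    # [('c1', 'approve'), ('c3', 'approve')]
print(review)  # [('c2', 'deny')]
```

The threshold becomes a policy lever: lowering it automates more cases, while raising it sends more decisions to the humans who can question the model.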
While the government faces many challenges in utilizing AI and modernizing IT infrastructure, there are clear benefits if a cautious approach is taken. Government agencies must constantly scrutinize the algorithms they use and should implement them transparently. Tools such as GitHub can be used to publish source code, openly track bugs, and manage enhancement requests. As long as the government is transparent about its projects and any mistakes that are made, citizens will be able to provide feedback on the methodology being used.
Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Retrieved from: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
Drew, C. (2016). Data science ethics in government. The Royal Society Publishing. Retrieved from: https://royalsocietypublishing.org/doi/full/10.1098/rsta.2016.0119
Fruhlinger, J. (2018). The OPM hack explained: Bad security practices meet China’s Captain America. Retrieved from: https://www.csoonline.com/article/3318238/the-opm-hack-explained-bad-security-practices-meet-chinas-captain-america.html
Hofstadter, D. (1995). On seeing A’s and seeing As. SEHR, volume 4, issue 2: Constructions of the Mind. Retrieved from: https://web.stanford.edu/group/SHR/4-2/text/hofstadter.html
National Institute of Standards and Technology (2018). NIST Big Data Interoperability Framework: Volume 1, Definitions. Retrieved from: https://bigdatawg.nist.gov/_uploadfiles/NIST.SP.1500-1r1.pdf
Office of Special Counsel (2019). Hatch Act: Overview. Retrieved from: https://osc.gov/pages/hatchact.aspx
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. London: Penguin Books.
Overby, P. (2017). IRS Apologizes For Aggressive Scrutiny Of Conservative Groups. Retrieved from: https://www.npr.org/2017/10/27/560308997/irs-apologizes-for-aggressive-scrutiny-of-conservative-groups
Spinello, R. (2017). Cyberethics. Jones & Bartlett Learning.
Watson, M. (2017). US dominates the world in data breaches. Retrieved from: https://www.itgovernanceusa.com/blog/us-dominates-the-world-in-data-breaches