As the Department of Veterans Affairs moves closer to a privatized model, veterans are wondering how the agency is protecting them against surveillance capitalism.
Under the guise of benevolence, Big Data giants like Apple, Facebook, and Google are courting VA executives for expansive reach into veterans' health information and other forms of informatics, areas previously protected out of privacy concerns.
On its own, many of you may think this is fine. After all, who does not like using an iPhone or the search capabilities of Google?
If this describes you, then reading the rest of this article is likely a waste of your time. You will likely not be concerned about surveillance capitalism and how companies like Apple and Google extract and leverage residual data for uses of which the original user was unaware.
If you are concerned about how Apple or Google is planning to leverage access to your health information, then keep reading.
So what is surveillance capitalism? And why am I writing about this on New Year's Eve 2019?
I came across the above video on YouTube last night [psst, it's very interesting, so please take some time to watch it] and was astonished. While I consider myself somewhat well informed about Big Data, I was disappointed to see how the industry appears to be duping the public and Congress without a penalty substantive enough to stop the behavior described in the video.
Born out of the dot-com bubble, "surveillance capitalism" is the residual data market Google discovered around 2001 to 2003 to help improve the company's bottom line after the bubble burst. That "residual data" was, and still is, used to create targeted predictive technologies for marketers, organizations, and government agencies.
This is basically the very lucrative practice of scraping metadata from apps and other software tools and then selling that metadata on top of selling the device or software itself.
RELATED: Google Shuts Off Adwords Spending
The metadata gleaned by manufacturers and cell companies from a smartphone is very valuable to marketers and government agencies.
Why does this matter related to the Department of Veterans Affairs?
Ever since the Mar-a-Lago trio meetings first surfaced in 2018, thanks to ProPublica, the public has quickly learned how private interests, including Apple, were attempting to court VA executives to use private-sector apps, software, and solutions related to Veterans Health Administration health information.
RELATED: The Shadow Rulers Of The VA
At stake was access to more than 20 years of health information stored within VistA belonging to over 20 million veterans, living and deceased.
Jump forward to the end of 2019. Apple succeeded in its push to convince VA to use its apps for a variety of purposes, including allowing veterans to download their electronic health records onto an iPhone.
Professor Of Surveillance Capitalism
In the video at the top of this article, Harvard professor Shoshana Zuboff provides a deep look into the world of surveillance capitalism. Professor Zuboff explains why we should all be troubled by how Big Data elites are essentially conning us.
Professor Zuboff says they want our private information, and they boil the proverbial frog slowly by withholding precisely how our data is scraped and how it is used. When they get caught doing something illegal or unethical, they issue an apology and change the subject to distract the public.
I bring this up here because VA is now rolling out various apps created by Apple to transmit your health information from a supposedly secure server to your iPhone. This was done, ostensibly, for benevolent purposes.
RELATED: VA Health Records Now On iPhone
But as Zuboff explains, nothing is as it seems.
The Case Of Android
Take the Android phone.
Many Google executives were excited to create a competitor to the iPhone and charge a competitive price. Instead, others familiar with how to profit off residual data decided to move the phone's marketing in a different direction.
Rather than create a high-margin phone like the iPhone, the company would instead subsidize Android to make it as cheap as possible, or even free.
Profits In Residual Data
The company stood to make even more money by selling the residual data it gathered to vendors, who could then help yet other companies market products to consumers at precisely the right time.
As a parallel, consider the infiltration of Google Chromebooks into every corner of child education despite mounting evidence that the use of these devices by children is bad for cognitive development.
Ever wonder why your kids’ schools are pushing Google computers and the use of Google apps for education? Is it because Google is benevolent as many school boards have been told? Ever ask what Google is doing with that metadata?
Our local school in Minnesota just mandated the use of Chromebooks by all students this year with limited exceptions.
Making Money On Veterans Genomic Data
You may also recall a story I first exposed here about a company called Flow Health. The company had a concerning plan to profit off the metadata it gleaned from our health information, and VA quickly canceled its agreement with Flow Health after I wrote about it.
The agency allegedly realized the agreement may have legal issues.
I was the only journalist to write about the problem. And that was one company. How many others slide by, presently accessing our health information and scraping off the metadata under the guise of "helping veterans," only to profit handsomely by exploiting our records?
What Of Cambridge Analytica?
All this may seem fine to many of you, but it takes a different turn when you look at Facebook and the Cambridge Analytica scandal, an obvious example of surveillance capitalism gone awry.
Facebook engineers published government-funded research into ways the company could use subliminal messaging to influence the behavior of people while offline.
The researchers concluded they not only could do it but that the individuals being manipulated would have no idea.
Enter Cambridge Analytica. While much of the news media focuses on Russia's involvement in the 2016 election, it seemingly glosses over England's role, through Cambridge Analytica, in the very real and confirmed illegal behavior just across the Atlantic Ocean.
This company was able to brag about how it could not only detect what information people share but also, through residual data, ascertain the deep, dark secrets and fears many people struggle with but never knowingly express.
By gathering data points from various vendors, companies like Cambridge Analytica were able to manipulate millions during the election cycle.
Google says it can manipulate 10 million voters.
Home DNA Testing Dangers
Recently, the Department of Defense issued a warning against servicemembers using commercial DNA tools like Ancestry and 23andMe because of the risk such data could pose in the wrong hands.
Who would've thought such seemingly innocuous products could be lumped into this discussion of surveillance capitalism?
The DOD memo read in part:
“Exposing sensitive genetic information to outside parties poses personal and operational risks to Service members,” reads the memo signed by Joseph Kernan, the undersecretary of Defense for intelligence, and James Stewart, the assistant secretary of Defense for manpower.
“These [direct-to-consumer] genetic tests are largely unregulated and could expose personal and genetic information, and potentially create unintended security consequences and increased risk to the joint force and mission,” the memo continues.
The reality is that commercial databases are frequently able to hide behind trade secrets to shield from public view both their methods and the data they maintain on us. In many instances, they can also be sold to the highest bidder.
In the wrong hands, DNA information can be weaponized. DOD likely already has the capability of doing so, and their curious warning is likely the result of some military official having a lightbulb moment while watching an Ancestry commercial.
Speaking of which, how is VA protecting veterans’ genomic data from exploitation?
If Big Data And Healthcare Had A Baby?
Over the horizon is the newest set of partnerships between Big Data and the healthcare industry, with the potential birth of yet another new agency called HARPA (I will get into that in a bit).
Through HIPAA releases for supposed research, these companies can pull in a very powerful dataset to market a variety of pharmaceuticals and other healthcare solutions to you based on your behavior online.
We already know our news media is highly reliant on funding from pharmaceutical companies, resulting in a shameful lack of coverage of problems within that industry.
Can this impact your rights as veterans?
Possibly. And what happens if your information is not properly protected, leaked, hijacked, or otherwise?
The current government position on government privacy violations appears to be that it has little obligation or liability. So, it can do what it wants with your data.
Proposed HARPA And Red Flag Laws
One group of lobbyists is pushing for President Donald Trump to approve gun rights restrictions based on real-time analysis of data points gathered from in-home technologies, health information, and wearable devices.
This is the most recent evidence of the danger looming in the merger between surveillance capitalists and a government ever more willing these days to hand over our health information.
Through the creation of a new agency called HARPA, modeled after DARPA, some gun control advocates seek the targeting and profiling of individuals with mental illness using surveillance technology and psychologists to limit access to weapons.
A judge and jury would be replaced by your VA shrink and an Apple Watch.
According to the Washington Post, devices and services supporting the new initiative would include Google Home, the Apple Watch, Fitbit, Amazon Echo, and the usual culprits.
The project is being pushed by the Suzanne Wright Foundation, an organization supporting advancements in medicine to treat cancer. Within its plan, SAFE HOME (short for "Stopping Aberrant Fatal Events by Helping Overcome Mental Extremes"), the organization called for using artificial intelligence to monitor fluctuations in "mental extremes" to curb gun violence.
The Trump Administration was reportedly considering the new agency as of August, following the El Paso and Dayton shootings.
While some folks think this level of Orwellian fusion within government and its public-private partnerships is a great idea, it certainly poses a great deal of risk without basis in research. It also runs the risk of profiling those with mental illness without due process or evidence that such individuals are more dangerous than others.
DOD Refutes Predictive Tech
The publication Reason highlighted a 2012 DOD study that refutes the premise of the HARPA plan, SAFE HOME, and it is important to bring up here.
A 2012 study that the Defense Department commissioned after the 2009 mass shooting at Fort Hood in Texas explains the significance of that fact in an appendix titled “Prediction: Why It Won’t Work.” The appendix observes that “low-base-rate events with high consequence pose a management challenge.” In the case of “targeted violence,” for example, “there may be pre-existing behavior markers that are specifiable.” But “while such markers may be sensitive, they are of low specificity and thus carry the baggage of an unavoidable false alarm rate, which limits feasibility of prediction-intervention strategies.” In other words, even if certain “red flags” are common among mass shooters, almost none of the people who display those signs are bent on murderous violence.
The Defense Department report illustrates the problem with a hypothetical example. “Suppose we actually had a behavioral or biological screening test to identify those who are capable of targeted violent behavior with moderately high accuracy,” the report says. If “a population of 10,000 military personnel…includes ten individuals with extreme violent tendencies, capable of executing an event such as that which occurred at Ft. Hood,” a test that correctly identified eight of those 10 dangerous people would wrongly implicate “1,598 personnel who do not have these violent tendencies.”
That scenario assumes a predictive test that does not actually exist. “We cannot overemphasize that there is no scientific basis for a screening instrument to test for future targeted violent behavior that is anywhere close to being as accurate as the hypothetical example above,” the report says.
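The report's arithmetic is worth checking directly, because it is a textbook base-rate problem. A minimal sketch in Python, where the 84% specificity is my own back-calculation from the report's 1,598 false alarms (the report itself does not state that figure):

```python
# Base-rate check of the DOD report's hypothetical screening test.
population = 10_000   # military personnel screened
violent = 10          # individuals with extreme violent tendencies
sensitivity = 0.8     # test correctly flags 8 of the 10
specificity = 0.84    # assumed; back-calculated from the 1,598 false alarms

true_pos = round(violent * sensitivity)
false_pos = round((population - violent) * (1 - specificity))
ppv = true_pos / (true_pos + false_pos)  # chance a flagged person is truly dangerous

print(f"true positives:  {true_pos}")
print(f"false positives: {false_pos}")
print(f"precision:       {ppv:.2%}")
```

Under these assumptions, the precision comes out to roughly half a percent: for every genuinely dangerous person the test catches, it wrongly flags about 200 harmless people. That is the base-rate trap the report's appendix is describing.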
In the rush to embrace artificial intelligence, we may be applying these software solutions in inappropriate and dangerous ways based on hysteria rather than science. But we should all recognize at this point that the companies' incentives to play along have roots not in benevolence but in surveillance capitalism.
Ironically, the SAFE HOME model of surveillance using commercial solutions gained momentum with Trump after the suspicious Dayton and El Paso shootings this summer.
Predicting Violence ‘Doesn’t Make Sense’
The Reason author followed the DOD report excerpt with this:
“According to a copy of the SAFEHOME proposal,” the Post says, “all subjects involved [in the research] would be volunteers,” and “great care would be taken to ‘protect each individual’s privacy,'” while “‘profiling of any kind must be avoided.'” It is hard to see how profiling can be avoided, since the whole premise of the project is that people who fit a certain psychiatric profile are especially prone to mass murder.
Once the research has been completed, of course, the resulting information would be pretty useless if it could be deployed only against volunteers. So how would that work? Would people with certain psychiatric diagnoses be legally required to carry electronic monitors aimed at detecting “small changes that might foretell violence”? How could such a requirement be reconciled with due process or the Fourth Amendment?
Maybe the requirement would be limited to people who pose an especially high risk of violence. But how would they be identified? Since mental health specialists are notoriously bad at predicting violence, SAFEHOME would have to develop two kinds of tests: one that identifies people who are prone to violence and one that predicts when those people are about to commit a crime. “I would love if some new technology suddenly came along that would help us identify violent risk,” Marisa Randazzo, former chief research psychologist for the U.S. Secret Service, told the Post, “but there’s so many things about this idea of predicting violence that [don’t] make sense.”
So, on the back of hysteria, we have a proposal to fully infiltrate homes using surveillance technology for the purpose of using that technology to curtail an individual’s rights.
Dangers Of Technocracy
That would be the final nail in the coffin as we move into a technocratic age where Americans would be ruled by the elite President Eisenhower warned us against in the second half of his farewell address.
Does that same lack of liability apply to government contractors providing these services? What if there’s a breach?
This brings us back to my question about Apple, our health information, privacy, and surveillance capitalism.
In the surveillance capitalist mindset, is the Department of Veterans Affairs anticipating these problems and protecting us against the usurpation of our right to privacy? Do we really own our medical data?
This is not a rhetorical question.
I would love to know what you think about privacy and the use of partnerships with Big Data for the solutions that seem to be good on the surface while little is known about how the residual data will be used, gathered, and stored.