Technology can aid people, and help to fulfill needs.
(Marketing tells us this.)
Technology can also hurt people, and harm fulfilled needs.
(News and research tell us this.)
Consider Amazon's voice assistant, Alexa, on their Echo devices. It might help you when your hands are full and you need to purchase more dish soap. However, it might harm you when you expect to have a private conversation in your home with your partner, only to find out that your device has recorded it and sent the recording to one of your partner's employees. Think this is outrageous? It happened. And this is one of the simpler examples of technology and harm.
Imagine you're a technology creator, you worked on Alexa, and you hear this news. What do you think? How do you feel? How do you react? Some possibilities:
These aren't real quotes from Amazon employees, but I can imagine all of them because I am a technology creator. At different moments I have had each of these thoughts about products I have worked on. I would like to suggest that these judgments are unlikely to change what happened.
We need to know when harms happen! But we need to have empathy for all of the people involved, instead of blame, judgment, and criticism, so that we can make requests that meet needs for all people. We need to see the reality of the technology and how it works, so that we can engage in truly creative problem solving, so that we can make requests that meet needs in real environments, with real constraints.
I'm using some words in an uncommon way. For now, this project follows the influence of Nonviolent Communication (NVC), a mediation technique that helps parties come to agreement by seeing each other differently through empathy, instead of searching for some nebulous "compromise" where each side feels like it loses an equal amount. Both sides win, or everyone does something else entirely.
For now, we will be avoiding judgment, blame, and criticism - any negative or positive evaluations. Nothing is "good" or "bad", or "right" or "wrong". There is only what we can observe, how people feel, and the impact on their universal human needs. Hopefully we can transform enemy images, and work towards a likely path for useful change.
In the future, this project may not be using NVC, and might not even be approaching the issue with mediation in mind. Maybe there is a need for more protest, outrage, and righteous anger, but for now that is a direction I am actively steering away from. Coerced change is unlikely to be durable.
The goal is not to teach the world NVC. However, to be able to help mediate the dispute between those harmed by technology and those building tech, it's important to know the basics. If you want to learn more, there are websites, books, classes, meetups, and other resources for that.
Yes and no. The key is that we are not deciding whether anything is harmful - people using the technology are. We're observing when they either talk about feelings that indicate unmet needs, or directly say which need is not met due to the use of the technology. That person is certainly making a judgment, but it is about their own feelings and needs. Unfortunately, not everyone is versed in the language of feelings and needs, so we may need to make empathetic guesses based on what we can observe, state that we are guessing, and then check our guesses.
Our concern is human well-being. News articles describing upset about technology come out every day. Once the media cycle is over, it's difficult to track the concern. What happened? What was the event or situation? What was the value, and what was the harm? If we don't have a detailed understanding of the problem, it will be difficult to create (or evaluate!) solutions.
We're going to take references to specific events and actions about a technology, organization, or process, and make a list of harms and values described in it.
We'll start with references - observed, specific, reported events, actions, and harms. From a reference page, we create abstracts with the main harms and values linked out to their respective pages. From some references, we can gather categories of harms based on research.
There are a few audience & contributor bases for TechHarms, separated by steps of the process.
First, we start with what exists before this project.
There are a lot of people out there!
TechHarms can fit in here, to help build understanding.
A Tech Researcher will see the reporting and decide it needs further documentation, and will use their background in Journalism, Industry Research, Academic Research, Law, Activism, or other disciplines. (If you consider yourself a Tech Researcher and don't consider yourself represented, we would like to hear! Contact us!) Researchers start to make sense of how this event or situation fits in with the larger history of technology impact.
A Tech Creator could have a background in Product, Design, Engineering, Marketing, Sales, Operations, Support, Management, or other disciplines. (If you consider yourself a Tech Creator and don't consider yourself represented, we would like to hear! Contact us!) Creators can see how products, features, processes, and designs work in general, based on the above observations.
An audience we do not explicitly target, but who will inevitably be attracted to the site, is tech consumers who are concerned with the harms of products in their lives.
Not news anymore, but a simple example we can use to understand how the process might work (for now!).
Many high-circulation international news sources reported on this incident, several citing the same local Seattle news source, KIRO 7.
"My husband and I would joke and say I'd bet these devices are listening to what we're saying," said Danielle, who did not want us to use her last name.
Every room in her family home was wired with the Amazon devices to control her home's heat, lights and security system.
But Danielle said two weeks ago their love for Alexa changed with an alarming phone call. "The person on the other line said, 'unplug your Alexa devices right now,'" she said. "'You're being hacked.'"
That person was one of her husband's employees, calling from Seattle.
"We unplugged all of them and he proceeded to tell us that he had received audio files of recordings from inside our house," she said. "At first, my husband was, like, 'no you didn't!' And the (recipient of the message) said 'You sat there talking about hardwood floors.' And we said, 'oh gosh, you really did hear us.'"
Danielle listened to the conversation when it was sent back to her, and she couldn't believe someone 176 miles away heard it too.
"I felt invaded," she said. "A total privacy invasion. Immediately I said, 'I'm never plugging that device in again, because I can't trust it.'"
(A reminder: analysis that is absent blame, judgment, or criticism is very difficult, but essential for all parties to work together. This might not go perfectly, but it is a goal we strive toward, correcting and iterating, with a concern for the well-being and needs of everyone involved.)
An Amazon Alexa Speech Recognition unit (unknown version) in a home had an Unintended Activation, resulting in Surreptitious Recording of a Private Conversation, with Unintended Disclosure by sending the recording to a contact in the owner's device-accessible Address Book.
"Unintended Activation" in this case means no Consent, and either no reported or unacknowledged Feedback.
What is the probability, per hour of normal speech, that the recognizer misinterprets something as an activation command?
How many hours of speech happen in hearing range of the devices?
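The two questions above combine into a simple expected-value estimate: expected unintended activations = (false activations per hour of speech) × (hours of speech in range). Here is a minimal sketch; every number in it is a hypothetical placeholder, not a measured Alexa rate:

```python
# Back-of-envelope estimate of unintended activations.
# ALL numbers below are hypothetical placeholders, not measured values.

false_activations_per_hour = 1 / 1000  # assumed: one misfire per 1,000 hours of speech
speech_hours_per_day = 3               # assumed: daily conversation within device range
devices = 7                            # assumed: one device per room, as in the report

daily_expected = false_activations_per_hour * speech_hours_per_day * devices
yearly_expected = daily_expected * 365

print(f"Expected unintended activations per day:  {daily_expected:.3f}")
print(f"Expected unintended activations per year: {yearly_expected:.1f}")
```

Even with a very low per-hour misfire rate, many devices listening for many hours make an occasional unintended activation plausible over a year; the real research task is pinning down those two rates.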
(this would be cross linked or embedded from the Alexa functionality description. It should have links to product manuals, and where available, source code.)
This is in progress.
This is going to require people from different backgrounds working together productively and harmoniously to achieve the desired goal. We're going to need the people who have used the technologies and suffered harms to trust that they can talk about this without repercussions. We need technology creators to feel safe describing what they do and how they do it, so that we can understand better. We need researchers to be able to share their results and feel heard, regardless of the results.
To help everyone feel safe working together, the project and all work towards it has a requirement of avoiding Blame, Criticism, and Judgment. We will generally avoid "Right / Wrong" language, including "Good / Bad" evaluations and uses of Should Statements.
Instead, we will focus on the needs people are trying to meet. Producer needs, Consumer needs. Collective needs, Individual needs. All people are trying to meet their needs.
Respect is a universal human need.
~~Care is a~~ (reword for business context)
Iterative: as good as needed based on resources available, adapt (constant feedback)
Resource-driven: scale up/down
Continuous: Infinite project. Mundane, ordinary, consistent progress.
Unlike many problems out there, this one seems quite tractable. There are only a few thousand technologies to be covered - too many for an individual, but it would take only a few thousand people each learning the process, researching one technology, and making one page to completely document the landscape. Even a few dozen people researching a few dozen technologies each should achieve reasonable coverage.
Sound interesting? I could use your help! Please contact me!
Wiki - forthcoming
Discussion - forthcoming
Identity - no idea yet (pretty curious how that's going to play out)
First discussion! Which tools meet which needs?
This might not yet meet your needs. I would love to help get it there. Please email me and let me know your thoughts.
CARE RESPECT NEEDS
try it, check! re-evaluate, iterate!
What I was asking them requires a literacy of expressing needs and an ability to understand one another's needs.
- Marshall Rosenberg, Living Nonviolent Communication
AVAILABLE FOR HIRE TO CONSULT FOR YOUR PROJECT