
Math

Yes, I’m about to talk about math. Real math. In my experience, a lot of folks back away when I bring this up, and I’m asking you not to, because that of God in everyone is getting lost in mathematical algorithms.

Seriously. Algorithms are affecting every part of our lives, and they’re perpetuating racist patterns while being sold as—and legally treated as—completely neutral. Quakers appear at statehouses, women’s marches, climate justice demonstrations, Black Lives Matter assemblies, and pride parades. We feel led to write minutes and send letters and pass around petitions. And I strongly suspect that, if we were open to it, we might also be called to witness in mathematics departments in universities.

Because I am not a mathematician, the concepts I’ll summarize below are not mine. For more information, solid scientific evidence, and a lot more detail, check out ProPublica’s series Machine Bias: Investigating Algorithmic Injustice or Cathy O’Neil’s book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.

And now, without further ado:

 

What Is An Algorithm?

An algorithm, for the purposes of this discussion, is a complicated math equation that can make assessments about the present—or predictions about the future—based on information gathered from the past.

An extremely simple mathematical algorithm might be used to predict the likely mean temperature in Anchorage, Alaska in December of 2020 based on the mean temperatures in Anchorage, Alaska in December in each of the last hundred years. A slightly more complex mathematical algorithm might take into account not just the temperatures themselves but the general trends in temperature changes in Anchorage in the last hundred years—is it getting generally warmer? cooler? or are temperatures oscillating?—and come to a more precise prediction based on that information. A very complex mathematical algorithm might take into account weather patterns throughout the entire world and the effects of climate change and come to an even more precise prediction.
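
If you’d like to see what the simpler versions of this look like in practice, here’s a tiny sketch in Python. The temperatures are invented, and a real climate model would be vastly more sophisticated; the point is only the difference between averaging the past and extending the trend.

```python
# A toy illustration (invented numbers, not a real climate model): predict a
# future December mean temperature from one hundred years of made-up past means.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1920, 2020)                                   # the past hundred years
temps = -8.0 + 0.03 * (years - 1920) + rng.normal(0, 1.5, 100)  # fake mean temps, °C

# The "extremely simple" algorithm: the average of the past is the prediction.
simple_prediction = temps.mean()

# The "slightly more complex" algorithm: fit a straight-line trend and extend it.
slope, intercept = np.polyfit(years, temps, deg=1)
trend_prediction = slope * 2020 + intercept

print(f"average of the past:    {simple_prediction:.1f} °C")
print(f"trend extended to 2020: {trend_prediction:.1f} °C")
```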

A different example of an algorithm might be one used for university admissions. A set of data—such as SAT scores, GPA, and possibly a few other factors—is fed into an algorithm, which is a complex math equation, and the equation offers a prediction of the likelihood that the given student will succeed in that university.
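
Just as an illustration, and not any real university’s formula, a toy version might look like the sketch below. The weights are made up; a real system would derive them from records of past students.

```python
# A toy admissions "algorithm" with invented weights, purely for illustration.
import math

def predicted_success(sat: int, gpa: float) -> float:
    # Hypothetical weights chosen for readability, not taken from any real model.
    z = 0.004 * (sat - 1000) + 1.2 * (gpa - 3.0)
    return 1 / (1 + math.exp(-z))      # squash the score into a 0-to-1 probability

print(round(predicted_success(sat=1350, gpa=3.7), 2))   # higher predicted likelihood
print(round(predicted_success(sat=1050, gpa=2.9), 2))   # lower predicted likelihood
```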

The algorithm is the equation itself. Many algorithms are insanely complex. Especially when we’re talking about algorithms used by computers, it’s likely that you wouldn’t recognize one as math and wouldn’t have any idea what it was if you saw it.

 

How Are Algorithms Written?

As I understand it, even a mathematician or a computer programmer would be unlikely to be able to sit down and write even a moderately complex algorithm. Instead, humans write code that teaches a computer how to write algorithms. Then the computer is fed massive amounts of data until it has gathered enough information to figure out how to make assessments of the present, or predictions about the future, based on data from the past.

This is much easier to understand with examples.

Suppose that you wanted a computer to recognize bone tumors in x-rays. You would do this by first feeding the computer thousands upon thousands of images containing bone tumors. You would show the computer where the tumors were. Over time, the computer would learn what particular types of x-ray shadows might indicate the presence of tumors, and it would write—for itself—an algorithm to recognize tumors. Eventually, you could give the computer any x-ray, and the computer could assess with reasonable accuracy whether a tumor was present.
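
For readers who want to peek behind the curtain, here’s a rough sketch of what teaching a computer to write its own algorithm can look like in code. It leans on the scikit-learn library and uses stand-in random numbers instead of real x-rays, so it shows only the shape of the process, not a working tumor detector.

```python
# A sketch of "the computer writes the algorithm for itself" (supervised learning).
# The "x-rays" are stand-in random numbers, so this is illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
x_rays = rng.normal(size=(1000, 64))       # 1,000 images reduced to 64 numbers each
has_tumor = rng.integers(0, 2, size=1000)  # the human-supplied labels: tumor or not

model = RandomForestClassifier()           # the part humans write: "learn from examples"
model.fit(x_rays, has_tumor)               # the computer derives its own decision rules

new_x_ray = rng.normal(size=(1, 64))
print(model.predict(new_x_ray))            # the computer's assessment of an unseen image
```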

(If you’re wondering about the accuracy rate, yes, computers are pretty accurate with this type of task. The highest level of accuracy, though, is a computer working in partnership with a human medical expert; the two working together detect tumors with more accuracy than either the computer alone or the human expert alone.)

Here’s another: suppose you were holding parole hearings, and you wanted to know the likelihood that a particular individual would reoffend if released. You would do this by first feeding the computer thousands upon thousands of records of people who had been arrested, imprisoned, and released on parole. Over time, the computer would learn various indicators that affected the likelihood that a person would be arrested again after parole. Eventually, you could give the computer any record of any person up for parole, and the computer could predict the likelihood that the person would reoffend if released.

Except. No. It can’t.

 

Where’s the Racism Part?

In the bone tumors example, the computer is only able to find new bone tumors because it has been taught accurately about past bone tumors. Computers are very intelligent in some ways, but if you fed the computer thousands upon thousands of pictures of kitty-cats and identified them as bone tumors, the computer would then, in the future, identify all kitty-cats as bone tumors and probably wouldn’t know an actual bone tumor from a turnip truck.

And in the parole example, we’re identifying kitty-cats as bone tumors. For one thing, the computer actually has no idea—even when reading past data—which individuals reoffended and which ones didn’t. What it knows is which individuals were arrested again and which ones weren’t. And we know from any number of studies that people of color are more likely than white people to be arrested, even if the two are engaged in identical behaviors.

But the computer doesn’t understand this. The computer also doesn’t care. Its job is to create an algorithm that can predict the chances of re-arrest, and it does this task amorally. It will use any data we give it, including level of education, zip code, and economic status, all of which are heavily influenced by race. Then it spits out a score. That score—just a number—is all the judge sees come out of the algorithm. The judge doesn’t know why the score was given. Even the computer itself doesn’t know why the score was given. There’s no explanation attached.
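
Here’s a small, deliberately artificial sketch of that proxy problem. Race is never handed to the model; one invented feature (living in a heavily policed zip code) plus a training label of re-arrest rather than actual offending is enough to produce different scores for people who behave identically.

```python
# A deliberately artificial sketch of the proxy problem. All numbers are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
heavily_policed_zip = rng.integers(0, 2, n)   # stands in for a race-correlated feature
offended = rng.integers(0, 2, n)              # actual behavior: identical across groups

# Re-arrest depends on behavior AND on how heavily the neighborhood is policed:
rearrested = ((offended == 1) & (rng.random(n) < 0.5)) | \
             ((heavily_policed_zip == 1) & (rng.random(n) < 0.3))

model = LogisticRegression()
model.fit(heavily_policed_zip.reshape(-1, 1), rearrested.astype(int))

for zip_flag in (0, 1):
    score = model.predict_proba([[zip_flag]])[0, 1]
    print(f"risk score for zip group {zip_flag}: {score:.2f}")
```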

Famously, one computer—experimenting with correlations, but not actually building algorithms at the time—discovered that the level of margarine consumption in the United States is an excellent predictor of the divorce rate in Maine. When the United States eats more margarine, more people in Maine get divorced. When the United States eats less margarine, fewer people in Maine get divorced. No rational person claims that this is a direct causal relationship, but a computer wouldn’t hesitate to do so. Artificial intelligence doesn’t know what’s absurd.
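
If you want to see how little it takes to “discover” a relationship like that, here’s a toy computation. The numbers are invented stand-ins shaped like the real margarine and Maine-divorce series, and the correlation still comes out near a perfect 1.0.

```python
# Two invented series that both happen to decline over a decade correlate almost
# perfectly, even though neither has anything to do with the other.
import numpy as np

margarine = [8.2, 7.0, 6.5, 5.3, 5.2, 4.0, 4.6, 4.5, 4.2, 3.7]   # invented, trending down
divorces  = [5.0, 4.7, 4.6, 4.4, 4.3, 4.1, 4.2, 4.2, 4.2, 4.1]   # invented, trending down

print(round(np.corrcoef(margarine, divorces)[0, 1], 2))          # roughly 0.99
```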

So here’s the situation we’re in: a computer crunches a person’s data through a computer-generated algorithm and makes a prediction based on zip code or education or who-even-knows-what, and in many states, judges are using these scores to grant—or deny—parole.

Oh—and why are people of color more likely to be arrested than white people, even if they’re engaged in identical behaviors? One reason is because neighborhoods containing a higher percentage of people of color are more heavily policed. And why are these neighborhoods more heavily policed? Because police distribution is frequently determined by algorithm. The math itself is perpetuating a racist cycle.

 

Why Aren’t These Algorithms Being Checked by People?

In theory, there are two ways that a person could check an algorithm, although the first way—the way most of us might think would be obvious—is actually impossible. You can’t just print out a computer-generated algorithm, look at it, and detect racism. Most of the time, a human being can’t even print out a computer-generated algorithm, look at it, and determine what it’s designed to do. It’s way too complicated. When computers are generating their own algorithms, they don’t bother generating them in such a way that people can read them. Why would they? Computers generate algorithms for their own use.

The only other way to check an algorithm is to test it, to check its predictions against reality and, if you like, for racial neutrality. And here’s where things get really interesting.

One algorithm that’s used in actual parole cases was checked—and adjusted—for racial neutrality. When it is accurate, it is race-neutral. That is, it predicts recidivism accurately at the same rate for white people and for people of color.

But when it is inaccurate, it is not race-neutral. It is significantly more likely to predict that a person of color would be arrested again when they actually would not, and it is significantly more likely to predict that a white person would not be arrested again when they actually would.

The algorithm can’t be adjusted to be race-neutral both in its accuracies and in its inaccuracies at the same time because the data it’s working from is racist data, gathered from a racist society.
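
For the mathematically curious, the error-rate comparison described above is straightforward arithmetic once you know each person’s group, the algorithm’s prediction, and what actually happened afterward. The handful of records in this sketch are invented placeholders, included only to show that arithmetic.

```python
# A sketch of the error-rate check described above, on invented placeholder records.
import numpy as np

group     = np.array(["white", "white", "white", "white", "poc", "poc", "poc", "poc"])
predicted = np.array([0, 0, 1, 1, 1, 1, 1, 0])   # 1 = algorithm predicts re-arrest
actual    = np.array([0, 1, 1, 0, 0, 1, 0, 0])   # 1 = person actually was re-arrested

for g in ("white", "poc"):
    in_group = (group == g)
    # False positive rate: flagged as high risk, but not re-arrested.
    fpr = predicted[in_group & (actual == 0)].mean()
    # False negative rate: rated low risk, but re-arrested after all.
    fnr = 1 - predicted[in_group & (actual == 1)].mean()
    print(f"{g}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```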

 

Are There Laws About This?

So far, not really. There’s an argument to be made (and I’ve heard lawyers make it) that there would be legal accountability for mathematical algorithms if anybody ever challenged one in court, but to my knowledge, that hasn’t happened yet. And no, there’s no legal accountability written into law right now for what comes out of a mathematical algorithm. And this is a problem, not just because many algorithms are perpetuating inequality but also because they are sold as neutral. Algorithms are sold as the solution to racism and other forms of bias. If we just take the human influence out of things, we will have solved these problems, right? How could math do anything but make a neutral, just, and rational recommendation?

But of course, this goes back to the fact that if the data’s not neutral, just, and rational, the algorithm’s not neutral, just, and rational, and neither is anything that comes out of it. And so far, nobody’s legally accountable. Not the computer programmers, not the companies that own the intellectual property and sell the use of the algorithms, and not the companies or the people that put the algorithms into practice. And certainly not the computers themselves.

 

What’s the State of the Ethics Conversation?

If anyone’s reading this who knows more about it than I do, I’d welcome updates and additional information here. But as far as I can tell, the ethics conversation hasn’t gotten very far.

There are some computer programmers and mathematicians who are talking about this. There are some people who are raising the alarm. But there aren’t very many of them.

I also question the quality of the overall ethics conversation. About a year ago, I attended an event at Columbia University. A statistics professor—extremely highly regarded in his field—was giving a speech about the ethical use of statistics. It lasted ninety minutes. For the first half-hour, I was thoroughly enjoying myself. The man was charming. But in the last hour, I realized that he wasn’t saying anything. Here was a person considered to be an ultimate authority in his field, giving a lecture about ethics to his students—theoretically, soon to be the next generation of authorities in the field—and he wasn’t prepared to say whether anything was or wasn’t ethical. He danced around a number of issues, feinted at commenting, made genuinely funny jokes, and then backed away without committing himself or challenging anybody.

Even if the ethics conversation in university math departments were robust, there’s an additional problem because the work done by programmers and statisticians and mathematicians often doesn’t belong to them. Many times, it’s a corporation that owns the intellectual property rights, which means that ultimately, the decisions about the use of such algorithms are made by managers, CFOs, and CEOs who may not even understand the ethical subtleties (and, of course, also might not care).

By the time the algorithm gets to an end user, it’s several steps removed from the human beings that originally worked with the computer that generated it, and whatever warnings or provisos those programmers originally made might very well never make it to the customer.

And then, of course, there are the lawmakers, who’ve done little if anything about algorithms so far, though New York City is trying. I can imagine a number of reasons for this, starting with the fact that the lawmakers themselves may not know or understand the ways in which math is functioning. Even if they did, it’s hard to explain to a constituency, especially in a sound bite, why you’re focusing on computer programming “rather than” social justice, even if the truth is that computer programming is at the heart of what threatens social justice. And finally, there’s the matter of speed. How do you write laws to govern algorithms when artificial intelligence is developing so quickly that new technology goes from state-of-the-art to archaic in the span of a year or two?

 

Exactly How Widespread Are Algorithms?

Artificial intelligence-generated algorithms are affecting every part of your life: the ads you see online and physically in your neighborhood, the prices you pay (online and in stores), where you see police and where you don’t, your likelihood of being arrested, the jobs you get, the wages you’re offered, whether you receive loans, the types of insurance you’re eligible for, the rates you pay for insurance, which universities accept or reject you, the search results that pop up when you Google something, and what you see (and don’t see) on social media.

If you are poor or if you are a person of color, you often pay higher prices for the same goods or services even while you receive fewer jobs and less pay for the jobs you do get. You’re also more likely to be targeted by police and by a variety of predatory schemes to take your money. All of these trends are reinforced by algorithms that are being sold as “neutral.”

And no matter who you are or what your political views might be, search engines and social media are using algorithms to make sure that you only see information that reinforces your preexisting perspective.

 

What Can We Do?

For one thing, we can stop being afraid of math. Or if we can’t stop being afraid, we can act in spite of being afraid. We can educate ourselves. Start with ProPublica’s Machine Bias: Investigating Algorithmic Injustice and Cathy O’Neil’s Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Both are easily accessible; you don’t have to be confident with math to understand them.

Beyond self-educating, we can pray, and we can act. If there is a field of social justice that’s being overlooked and needs a spotlight, this is it. I don’t know what activism looks like in this field. Nobody does. But we can discern, be led, try things, fail, and try again…

What I do know is that the answer isn’t “stop using technology.” For one thing, it’s too late.  Artificial intelligence is here to stay, and our choice as Friends is only whether or not to be involved with it conscientiously.  (We don’t even really have the choice of whether to be involved in it, since the algorithms produced by AI are, as explained above, deeply ingrained in every part of our lives.)

Even if we could opt out, I don’t believe we ought to.  When it comes to technology, I feel led to minister and witness, not to withdraw:

[Image: Harvard Business Review]

[Image: Truth in Recruitment]

Are we, as Friends, prepared to deal with this complexity?  Are we willing to be?  How do we respond?

Grieving Abby Sciuto

This post is a considerable departure from my usual, but it’s something I’ve been chewing on for months, and it seems worth talking about.

In 2003, a TV show called NCIS came on the scene. NCIS stands for Naval Criminal Investigative Service. It’s a shoot-’em-up action crime show with heart, still in production after fifteen years, in my opinion because of the investigative-team-as-family theme at its center. Gruff ex-Marine Jethro Gibbs (Mark Harmon) is surrounded by a crew of coworkers, most of whom are at least a generation younger than he is, nearly all of whom have serious trauma in their pasts, and between the moments of plot, Gibbs nurtures his little Gibblets (as fans refer to them) much like a tough-love dad.

 

One of the show’s most popular characters is Abigail Sciuto, generally known as Abby (Pauley Perrette). When the show premiered, the actress was about thirty-four, though the age of the character was never stated. Abby is a forensic scientist with all but superhuman abilities, beyond brilliant, though it helps that she’s working with technology that seems to be flavored with a soupçon of sci-fi. She’s also deeply religious and exuberantly loving and passionate about hard rock and tattooed and pigtailed.

Abby Sciuto is so compelling that some researchers credit the character with an upswing in young women entering the sciences. They call it “the Abby effect.”

I really love Abby.

At the end of season fifteen, Pauley Perrette left NCIS. There are some pretty sad rumors around the question of why. I don’t want to repeat the rumors because the truth is that we just don’t know. Someday we might.

In the meantime, I am grieving Abby. (No, the character didn’t die, but it’s clear she’s not coming back, which amounts to the same thing.) And as I’ve mused about this character—warm, sassy, faithful, smart—did I mention smart?—I’ve realized something. It’s not really Abby that made Abby so special. It’s the people around her.


Don’t get me wrong. I don’t want to take anything away from this character. She’s unique on TV and fascinating to watch. But sometimes people talk about her and say things like “the world needs more Abbys,” and I think the truth is, the world has a pretty fair number of Abbys. Our problem is that most Abbys get squished.

When Abby dresses authentically, her coworkers appreciate her individuality, while many women in the world have to dress according to someone else’s code in order to be taken seriously.

When Abby teases her coworkers, they tease her back, then get down to business and listen to what she has to contribute, while many women in the world learn to play things straight because otherwise they won’t be taken seriously.

When Abby expresses her fears or shares about her personal life, her coworkers respect her honesty and respond in kind, while many women in the world don’t dare show vulnerability because this will be interpreted as weakness.

When Abby excels, her bosses recognize that and offer raises and promotions, while many women are consistently passed over because raises and promotions are more closely tied to golf course relationships than professional capability.

When Abby is smarter than her coworkers, they admire her and act on her contributions, while many women experience jealousy and doubt from the people they work with.

This is why I’m grieving Abby. It’s because I’m mourning the example of a workplace that embraces her: a workplace where an intelligent and highly competent young woman is automatically taken seriously and regarded with respect, where contributions are evaluated on their inherent value, where ideas are not ignored until a man repeats them, where it’s possible to be emotional and still be viewed as rational because everyone understands that these two states aren’t mutually exclusive.

The very existence of Abby, in the highly respected professional position that she held, was frankly a fantasy.

But it doesn’t have to be.

Here’s what I take from knowing Abby:

Do I want to be like her?  Yes, absolutely.  I find Abby inspiring.  I hope to be as smart, as hard-working, as kind, as authentic, as joyful, and as loving as she.

But more importantly, I want to be like Jenny Shepard and Leon Vance and Jethro Gibbs and Ducky Mallard and Tony/Ziva/Cait/Tim/Jimmy.  I want to recognize and value all of those who cross my path for their authentic contributions to the community.  And given the world we live in, that means working really hard to see and put aside my own prejudices and–sometimes–to point to people who are being overlooked and, when necessary, amplify their voices.

I’ll miss you, Abby.