“I think my Facebook has been hacked,” my mother told me recently. “Someone is sending messages from my account.” I poked around for a bit and found the problem, which turned out to be the combination of a weak password—which we fixed—and sloppy “friending” practices, which we’re still working on. When my mother calls for computer help, I’m happy to teach her what I can. After all, she taught me how to use a spoon and a toilet, which are much less complicated.

She’s not the most sophisticated computer user (sorry, Mom). But should she have to be? Shouldn’t we expect more from the companies and service providers to whom we entrust our personal information?

This week we learned about a data mining firm acquiring the records of 50 million Facebook users. Facebook knew about the issue for two years, but said nothing until a whistleblower decided to make the news public. There’s a pretty strong argument that Facebook doesn’t see us as customers but as products to be sold to advertisers, political campaigns, and others. And it’s worth remembering that we pay nothing to use the social network. In fact, we voluntarily provide it with all of the personal data it gathers about us—much of the data from this particular leak had been collected from personality quizzes taken by users. For all our concerns about computer and network security, we’re pretty lax about what we share online.

The Guardian, one of the newspapers that broke the story, wrote about steps you can take to protect your Facebook data, and The Verge wrote about how to use Facebook while giving it the minimum possible amount of data. (This means you, Mom.) But the very nature of the site means you’re giving up some privacy just by using it.

Another option is to delete your account entirely, but you need to decide for yourself whether the site is worth the price of admission. For now, at least, Facebook remains a useful tool for nonprofits because so many of the people who make up their audiences use the network.

It’s not the only company with questionable data practices. MoviePass, the subscription service that lets you purchase a movie ticket each day for a flat monthly fee, has two million subscribers and counting—but it sees its users’ personal information as a product to be sold. The CEO recently said that MoviePass knows an “enormous” amount about its subscribers, whom it tracks through its mobile app and their phones’ GPS.

“We watch how you drive from home to the movies,” he said. “We watch where you go afterwards.” With that information, the company can direct subscribers to restaurants before or after movies in exchange for a cut from vendors.

Do you have a smart speaker like Amazon’s Alexa-powered Echo? Do you ask Siri questions or look things up on Google? Do you watch movies or read books recommended by Netflix or Amazon? Each of these tools is a form of artificial intelligence that works through something called “machine learning,” which has a lot in common with how human beings learn: the more information these systems gather, the better they get at answering questions, making predictions, and putting things in context.
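If you’re curious what that learning actually looks like, here’s a minimal sketch in Python using the scikit-learn library. It’s a toy spam filter with made-up training messages—nothing like the actual systems behind Alexa or Netflix—but it shows the core idea: the model picks up patterns from labeled examples rather than following hand-written rules.

```python
# A toy illustration (not any vendor's actual system): a text classifier
# learns to label messages from examples it has already seen.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data; real systems learn from millions of examples.
messages = [
    ("win a free prize now", "spam"),
    ("claim your free gift card", "spam"),
    ("meeting moved to 3pm", "not spam"),
    ("lunch tomorrow?", "not spam"),
]
texts, labels = zip(*messages)

# Count word frequencies, then fit a simple probabilistic model to them.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["free prize waiting"]))     # likely: ['spam']
print(model.predict(["are we still meeting?"]))  # likely: ['not spam']
```

Feed a model like this more examples and its guesses get sharper, which is exactly why these companies are so hungry for our data.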

Artificial intelligence is, increasingly, a part of our lives, from our email programs learning to detect spam to cars that brake automatically. We’re not always aware of it, even when we’re interacting with it, but sometimes it makes itself known all too well.

At TechCrunch, Ryan K suggested that Silicon Valley is robbing the world of the possible benefits of AI by hoarding it for money-making ventures such as data mining. Maybe that’s for the best. Writing for Quartz, Ana Santos Rutschman noted that AI made it possible for Stephen Hawking—the theoretical physicist who died this month from complications related to ALS—to have a voice, and one of the things he used it for was to warn us against the dangers of AI.

For some people, the benefits of all these programs might still be worth the cost, even if the cost isn’t entirely obvious. But again, should we expect more from the organizations to whom we entrust our personal data?

In New Orleans, Palantir—a Silicon Valley data mining firm funded in part by the CIA’s venture capital arm—has been quietly testing predictive policing technology without the knowledge of the public or even city council members.

“Cities around the country have recently begun to grapple with the question of if and how municipalities should regulate data sharing and privacy,” Ali Winston wrote. “Some cities like Seattle and Oakland have passed legislation establishing committees to craft guidelines and conduct oversight,” while others are discussing what role city governments should play regarding privacy in the digital age.

It’s not just cities. Other countries are struggling with the same issues. Writing on the Beaconfire Red blog, Lynn Labieniec said that some are already cracking down to protect people’s data as European nations implement the General Data Protection Regulation (GDPR), a law designed to make sure people have control over their personal information and how it is used. So far, the U.S. government seems to be taking a different view, but the GDPR might affect your nonprofit if you have European constituents in your databases.

What does all of this mean for your organization?

At Idealware, we talk a lot about data: whether nonprofits are collecting the right data in all the right ways, and whether they’re using it to inform their decisions. In a sense, data is just another word for “information” or “knowledge,” and knowledge is power—which means it’s not enough to collect and use data; we have to do so responsibly. As nonprofits, it’s on us to treat what we know about our constituents carefully, and to protect it against misuse.

That’s what people expect from the organizations they trust with their personal data.

See you next month…

Chris

P.S.
Thank you to everyone who sent me suggestions for this month’s roundup of links. As always, if you come across something you think would be a good fit for the Best of the Web, send it to me at info@idealware.org.