Europe: Journalists Against Free Speech

Judith Bergman
Gatestone Institute


  • Gone is all pretense that journalism is about reporting the facts. These are the aims of a political actor.
  • Being bought and paid for by the EU apparently counts as “press freedom” these days.
  • According to the guidelines, journalists should, among other things, “Provide an appropriate range of opinions, including those belonging to migrants and members of minorities, but… not… extremist perspectives just to ‘show the other side’…. Don’t allow extremists’ claims about acting ‘in the name of Islam’ to stand unchallenged…. where it is necessary and newsworthy to report hateful comments against Muslims, mediate the information.”

The European Federation of Journalists (EFJ), “the largest organization of journalists in Europe, represents over 320,000 journalists in 71 journalists’ organizations across 43 countries,” according to its website. The EFJ, a powerful player, also leads a Europe-wide campaign called “Media against Hate.”

The “Media against Hate” campaign aims to:

“counter hate speech[1] and discrimination in the media, both on and offline… media and journalists play a crucial role in informing…policy … regarding migration and refugees. As hate speech and stereotypes targeting migrants proliferate across Europe… #MediaAgainstHate campaign aims to: improve media coverage related to migration, refugees, religion and marginalised groups… counter hate speech, intolerance, racism and discrimination… improve implementation of legal frameworks regulating hate speech and freedom of speech…”

Gone is all pretense that journalism is about reporting the facts. These are the aims of a political actor.

A very large political actor is, in fact, involved in the “Media against Hate” campaign. The campaign is one of several media programs supported by the EU under its Rights, Equality and Citizenship Programme (REC). In the REC program for 2017, the EU Commission, the EU’s executive body, writes:

“DG Justice and Consumers [the EU Commission’s justice department] will address the worrying increase of hate crime and hate speech by allocating funding to actions aiming at preventing and combating racism, xenophobia and other forms of intolerance… including dedicated work in the area of countering online hate speech (implementation of the Code of Conduct on countering illegal hate speech online)… DG Justice also funds civil society organisations combatting racism, xenophobia and other forms of intolerance”.

This political player, the EU, the biggest in Europe, works openly at influencing the “free press” with its own political agendas. One of these agendas is the issue of migration into Europe from Africa and the Middle East. In his September State of the Union address, the president of the EU Commission, Jean-Claude Juncker, made it clear that whatever Europeans may think — polls repeatedly show that the majority of Europeans do not want any more migrants — the EU has no intention of putting a stop to migration. “Europe,” Juncker said, “contrary to what some say, is not a fortress and must never become one. Europe is and must remain the continent of solidarity where those fleeing persecution can find refuge”.

The European Union’s REC Program also recently financed the publication of a handbook with guidelines for journalists on how to write about migrants and migration. The guidelines form part of the RESPECT WORDS project — also EU-financed — which “aims to promote quality reporting on migrants and ethnic and religious minorities as an indispensable tool in the fight against hate”. The new guidelines are “aimed at strengthening quality media coverage of migrants and ethnic and religious minorities”. The handbook was launched on October 12 by the International Press Institute (IPI) — an association of media professionals representing leading digital, print and broadcast news outlets in more than 120 countries. IPI boasts that it has been “defending press freedom since 1950”. (Being bought and paid for by the EU apparently counts as “press freedom” these days.) Seven other media outlets and civil society groups based in Europe participated in the project and presented it at an event at the European Parliament in Brussels attended by MEPs and civil society experts. According to the press release, the guidelines are “supplementary to standards already in place at news outlets”.

The guidelines state that, “journalism cannot and should not ‘solve’ the problem of hate speech on its own” but that it can help to prevent its “normalisation”. However, “meeting this challenge requires the involvement of many actors, in particular the European Union, which must reinforce existing mechanisms and support new tools designed to combat hate speech…”

Why do journalists, who claim to fight for the freedom of the press, now appeal to the EU to help bring an end to freedom of speech in Europe?

According to the guidelines, journalists should, among other things:

“Provide an appropriate range of opinions, including those belonging to migrants and members of minorities, but… not… extremist perspectives just to ‘show the other side’… Avoid directly reproducing hate speech; when it is newsworthy to do so, mediate it by…challenging such speech, and exposing any false premises it relies on. Remember that sensitive information (eg race and ethnicity, religious or philosophical beliefs, party affiliation or union affiliation, health and sexual information) should only be mentioned when it is necessary for the public’s understanding of the news”.



YOU’RE TERMINATED: Vladimir Putin warns of future sci-fi super-human soldiers more ‘destructive than nuclear bombs’ who feel no fear or pain

Mark Hodge
UK Sun

VLADIMIR Putin has claimed genetically-modified super soldiers “worse than a nuclear bomb” could soon become a reality.

The strongman Russian President spoke to a crowd of students about the prospect of an army of trained killers incapable of feeling “pain or fear”, much like the characters in the 1992 action movie Universal Soldier.

He revealed that scientists are close to breaking the genetic code which would enable them to create “a human with pre-designed characteristics”.

Speaking at a youth festival in Sochi, Putin warned of the consequences of playing God with man’s genetic code, reports The Express.

He said: “A man has the opportunity to get into the genetic code created by either nature, or as religious people would say, by the God.

“All kinds of practical consequences may follow. One may imagine that a man can create a man not only theoretically but also practically.

“He can be a genius mathematician, a brilliant musician or a soldier, a man who can fight without fear, compassion, regret or pain.

“As you understand, humanity can enter, and most likely it will in the near future, a very difficult and very responsible period of its existence.

“What I have just described might be worse than a nuclear bomb.”

The autocrat warned that world leaders must agree on regulations to control the creation of mass-killing super soldiers.

He said: “When we do something, whatever we do, I want to reiterate it again – we must never forget about the ethical foundations of our work.”

Last month, Putin revealed he is afraid humans in the future will be hunted and EATEN alive by flesh-munching robots.

The infamously icy-veined Russian leader showed his more anxious side while discussing artificial intelligence (AI) at an event in Moscow.

Former KGB spy Putin asked Arkady Volozh, chief of internet firm Yandex, when the technology will “eat us”, reports RT.

Volozh, who was giving Putin a tour of the company’s headquarters, appeared to be taken aback by the question.

At first he replied: “I hope never.”

But after a pause, he used the analogy of excavators and explained that they are better at digging than people.

The computer boffin then said: “But we don’t get eaten by excavators.”

Yet Putin dismissed this comment by adding: “They don’t think.”

Earlier this month, the election-hacking autocrat said that AI was “the future, not only for Russia, but for all humankind.”

While saying the burgeoning technology had “colossal opportunities”, Putin added “whoever becomes the leader in this sphere will become the ruler of the world.”

And keen to avoid a Cold War-style arms race, the Russian President claimed he would share his country’s “know-how” with other nations.

Microsoft rep calls Catholic Church a key ally in protecting kids online

Catholic News Agency

The head of Microsoft’s office for online safety has said the Catholic Church is a key ally in the ongoing effort to protect children from sexual abuse and exploitation online.

When asked why a major tech company would partner with the Catholic Church on such an important issue, Jacqueline Beauchere, Chief Online Safety Officer for Microsoft Inc., had a simple response: “why not?”

Beauchere spoke during an Oct. 3-6 conference on Child Dignity in the Digital World, addressing the topic of “How Do Internet Providers and Software Developers Define Their Responsibility and Limits of Cooperation Regarding Safeguarding of Minors.”

Speaking with a small group of journalists at the conference, Beauchere said, “why would you not take advantage of such a huge platform and such a huge array of people to make people aware of the situation?”

Beauchere said she is willing to collaborate with “anyone who wants to talk about these issues,” because “we all can learn from one another. And the only way we’re going to get better, the only way we’re going to do and learn more is to really expand the dialogue.”

She also spoke about the future steps and investments technology companies can make in the fight against online child exploitation, including some highlights from a joint declaration from conference participants that will be presented to Pope Francis in an audience tomorrow.

Beauchere was one of two representatives of major tech organizations present at the conference, the other being Dr. Antigone Davis, Head of Global Safety Policy for Facebook.

The conference was organized by the Pontifical Gregorian University’s Center for Child Protection in collaboration with the UK-based global alliance WePROTECT and “Telefono Azzurro,” the first Italian helpline for children at risk.

Vatican Secretary of State Cardinal Pietro Parolin opened the conference as a keynote speaker. Other participants in the congress include social scientists, civic leaders, and religious representatives. Discussion points include prevention of abuse, pornography, the responsibility of internet providers and the media, and ethical governance.

Please read below for excerpts of Beauchere’s conversation with journalists:

Thank you for your time. It was very interesting to hear what Microsoft is doing to combat this issue. But many speakers who followed you said that more could be done in terms of investment and money put into the NGOs working on this issue, and into technologies for fighting it. What is your response? What can be done in the future to address this call to action?

I would say the biggest room in the world is the room for improvement, and we can all do more. We can all do better. We just have to determine what is going to be the best route to direct our resources. So we come at the problem from a technology perspective, from an internal governance perspective with policies and standards and procedures, with education and with partnerships. We are already supporting a number of organizations, which I noted in my remarks. We are on the board of the International Center for Missing and Exploited Children; I personally sit on the board of the WeProtect organization; I sit on the board of the INHOPE organization; and I used to sit on the board, now another colleague does, of the Technology Coalition. That’s all technology companies coming together to come up with technical solutions, and other operational means, to alleviate the problem. So there are many things we are doing; it’s a question of having so precious few resources – we’re given budgets like everyone else. We don’t get an unlimited pot of money, so we have to decide where we are going to put our efforts and what is going to deliver the most bang for the buck.

And where do you see this money being used most importantly?

I think efforts like this that really bring together a multitude of stakeholders. As I said, technology companies work together. Sometimes I feel like I work and talk to Twitter and Google and YouTube and Facebook more so in a week than I do with my own colleagues at Microsoft, so we’re always working together. Civil society works together. Academia works together. Government works together. But now we need to bring all of those stakeholders together. WeProtect started that effort, but I could say that there are really only four stakeholder groups there: that would be the technology companies, governments, law enforcement and civil society. But now with this world congress we’re expanding to include the Church and faith-based organizations, to include a broader array of academics, to include the public health sector. Now, with more people it could sometimes present a little bit more conflict, or hiccups or hurdles that we’re going to have to get over, but we’re going to have to find a way that we’re all going to have to agree on certain things, and then build from there.

On a practical level, you’ve spoken about all the boards and committees that you are a part of, and it’s really important to be a part of that conversation, but if you were going to tell me now where you are going to allocate your resources next as the frontier of where to fight this issue, where do you see the challenges and problems? Where should that money be allocated?

It has to be invested in technology. But technology investments don’t pay off immediately, they take time. So a lot of people are asking, ‘can’t you just invent a technology that can determine that that’s a child sexual abuse image, and then it won’t be uploaded from the get-go?’ This is artificial intelligence, this is machine learning, it’s only been in recent years that we’ve been able to identify, via artificial intelligence and via machine learning, that a cat is a cat. So when you put in the complex scenarios of the parade of horribles that could happen to a child, and the different actors that are involved in those scenarios and the different body parts, and the different scenes and places where things could happen as far as these crimes, you’re adding so much more complexity. So there’s a lot of work. These technology investments are not going to pay off immediately. I think people look at technology and they think it’s a silver bullet, they think that technology created these problems, so technology should fix them. Number one, technology didn’t create these problems, and number two, technology alone cannot solve them. So technology investments are key, but they’re not going to pay off immediately. So these kinds of efforts that are multi-party, multi-focused, multi-pronged and faceted, that’s where we need to put our efforts and I think the money will follow. The money will follow what proves the most successful or will at least show the most promise.

In terms of investment, many of the speakers addressed or were from areas of the world that are not as developed in technology, but are starting to gain access to the internet and don’t have the background or the education about what it can do. In terms of investment, do you guys have plans to address this issue in some of these nations that are not as developed?

We have educational and awareness raising resources available everywhere. Personally I see the developing world as an opportunity. Yes they are gaining access to technology quicker, but they have the ability to learn from the Western world and the mistakes that we made, and they have the ability and the opportunity to do things right from the ground up. They just can’t let the technology get ahead of them, they have to really incorporate the learning and the awareness raising and some of the good, healthy practices and habits, developing those habits for going online and keeping oneself and one’s family safe. But I see it as more of an opportunity than as a problem.

You mentioned that you are also trying to broaden your network of allies in fighting this issue, so why broaden it to faith-based organizations, why come to a Jesuit university to participate in this conference?

I say why not? Why would you not take advantage of such a huge platform and such a huge array of people to make people aware of the situation? These are very difficult conversations to have. People don’t want, whether it’s people in government or elsewhere, they don’t want to acknowledge that these issues exist. It’s a very delicate topic, it’s a very sensitive topic, in some instances it’s taboo, so it’s been very refreshing to have a new outlet, to have a new audience, to potentially involve new stakeholders, and to see how people are coming to the issue and addressing it very directly, and very head-on, and being very open and transparent about what’s happening in their countries, and about how serious these situations and these issues are. So I will collaborate, I will work with anyone who wants to talk about these issues; we all can learn from one another. And the only way we’re going to get better, the only way we’re going to do and learn more is to really expand the dialogue.

You mentioned that a lot of people say that it’s all technology’s fault. So what can technology do to help in the issue and what should people perhaps take into their own hands?

People need to own their own presence online and they need to know what they are doing. They need to safeguard their own reputation. So there are certain habits and practices that they could develop, and we offer a wealth of materials on our website. One thing I want to point out about people and their own learning is that sometimes, unfortunately, that learning comes a little bit too late. We were discussing this in my workshop. It’s been my experience that what drives people to action, and I’m talking about pro-action, is something bad happening to them. Their identity has been stolen, so now I need to go figure out how to protect myself from identity theft. A child’s been bullied, so now I need to go figure out what’s been happening with online bullying. We want to galvanize people and rally them to take some proactive steps to safeguard their reputations, to know who and with whom they are talking, to know what they are sharing online, to be discreet where discretion is warranted. That’s not suppressing the kinds of engagements, and connections and interactions they want to have, but that’s doing so with eyes wide open, and that’s doing so with a healthy dose of reality and of what could potentially go wrong and of being aware of risks. I know there was a first part to your question…

What can technology do when it comes to this issue, and what are its limits?

Well, technology can always help, and we tell people to get help from technology. So technology can help determine, for instance, what parents want their kids to see online, what websites they want them to go to, who they want them to communicate with. Some people call them “family controls”; at Microsoft we call them “family safety settings.” And they’re right there in your Windows operating system, in your Xbox Live console. So that is our obligation, that is our obligation as a technology company: to put those kinds of tools and resources into the product itself to help people, and to give them the tools they need to better educate themselves, make them aware of these issues, and to hopefully get them to want to teach others, to inform others. So it very much is a multi-stakeholder issue; it’s everyone’s problem and it’s everyone’s opportunity.

Are you going to the meeting with Pope Francis tomorrow?

Absolutely. I wouldn’t miss it for the world.

Are you Catholic?

Yes, I am. I spoke with my priest before I came here, because I was a bit overwhelmed.

What do you expect from that meeting, what do you hope is going to come out of that meeting tomorrow with the Pope?

Well he’s going to be presented with this declaration, which is a series of commitments, or calls to action, for every stakeholder group who was present at this congress, and it has the ability to be monumental. I really hope there is a follow-up and follow-through, because I have attended things like this before, not of this magnitude, where everyone is so excited and so jazzed to take this forward, and there’s very little follow-up and follow-through, and I personally am someone who always wants to do more and to continue. I don’t sign up to anything, I don’t commit to anything unless I’m going to be fully in.

In many ways Pope Francis has helped put climate change and immigration into the minds of policy makers. Do you think he has the ability to put the protection of minors up there?

Of course, of course.

Some have said there is perhaps anti-Catholic, anti-religious sentiment in Silicon Valley. Will they listen to the Church on this?

Well, we’re not in Silicon Valley, so I can’t attest to what’s going on in Silicon Valley, but I personally don’t see it. When I told my manager, my boss, that I had the ability to come here, he said, ‘get me an invitation, too.’ That was very wonderful to hear, and I did get him an invitation, but unfortunately he changed roles and didn’t think it was particularly relevant for him to come, thinking that since he’s no longer in the same role perhaps he should not. So I’m the only one here for Microsoft, but I’m here.

‘Our minds can be hijacked’: the tech insiders who fear a smartphone dystopia

Google, Twitter and Facebook workers who helped make technology so addictive are disconnecting themselves from the internet. Paul Lewis reports on the Silicon Valley refuseniks alarmed by a race for human attention

The Guardian

Justin Rosenstein had tweaked his laptop’s operating system to block Reddit, banned himself from Snapchat, which he compares to heroin, and imposed limits on his use of Facebook. But even that wasn’t enough. In August, the 34-year-old tech executive took a more radical step to restrict his use of social media and other addictive technologies.

Rosenstein purchased a new iPhone and instructed his assistant to set up a parental-control feature to prevent him from downloading any apps.

He was particularly aware of the allure of Facebook “likes”, which he describes as “bright dings of pseudo-pleasure” that can be as hollow as they are seductive. And Rosenstein should know: he was the Facebook engineer who created the “like” button in the first place.

A decade after he stayed up all night coding a prototype of what was then called an “awesome” button, Rosenstein belongs to a small but growing band of Silicon Valley heretics who complain about the rise of the so-called “attention economy”: an internet shaped around the demands of an advertising economy.

These refuseniks are rarely founders or chief executives, who have little incentive to deviate from the mantra that their companies are making the world a better place. Instead, they tend to have worked a rung or two down the corporate ladder: designers, engineers and product managers who, like Rosenstein, several years ago put in place the building blocks of a digital world from which they are now trying to disentangle themselves. “It is very common,” Rosenstein says, “for humans to develop things with the best of intentions and for them to have unintended, negative consequences.”

Rosenstein, who also helped create Gchat during a stint at Google, and now leads a San Francisco-based company that improves office productivity, appears most concerned about the psychological effects on people who, research shows, touch, swipe or tap their phone 2,617 times a day.

There is growing concern that as well as addicting users, technology is contributing toward so-called “continuous partial attention”, severely limiting people’s ability to focus, and possibly lowering IQ. One recent study showed that the mere presence of smartphones damages cognitive capacity – even when the device is turned off. “Everyone is distracted,” Rosenstein says. “All of the time.”

But those concerns are trivial compared with the devastating impact upon the political system that some of Rosenstein’s peers believe can be attributed to the rise of social media and the attention-based market that drives it.

Drawing a straight line between addiction to social media and political earthquakes like Brexit and the rise of Donald Trump, they contend that digital forces have completely upended the political system and, left unchecked, could even render democracy as we know it obsolete.

In 2007, Rosenstein was one of a small group of Facebook employees who decided to create a path of least resistance – a single click – to “send little bits of positivity” across the platform. Facebook’s “like” feature was, Rosenstein says, “wildly” successful: engagement soared as people enjoyed the short-term boost they got from giving or receiving social affirmation, while Facebook harvested valuable data about the preferences of users that could be sold to advertisers. The idea was soon copied by Twitter, with its heart-shaped “likes” (previously star-shaped “favourites”), Instagram, and countless other apps and websites.

It was Rosenstein’s colleague, Leah Pearlman, then a product manager at Facebook and on the team that created the Facebook “like”, who announced the feature in a 2009 blogpost. Now 35 and an illustrator, Pearlman confirmed via email that she, too, has grown disaffected with Facebook “likes” and other addictive feedback loops. She has installed a web browser plug-in to eradicate her Facebook news feed, and hired a social media manager to monitor her Facebook page so that she doesn’t have to.

“One reason I think it is particularly important for us to talk about this now is that we may be the last generation that can remember life before,” Rosenstein says. It may or may not be relevant that Rosenstein, Pearlman and most of the tech insiders questioning today’s attention economy are in their 30s, members of the last generation that can remember a world in which telephones were plugged into walls.

It is revealing that many of these younger technologists are weaning themselves off their own products, sending their children to elite Silicon Valley schools where iPhones, iPads and even laptops are banned. They appear to be abiding by a Biggie Smalls lyric from their own youth about the perils of dealing crack cocaine: never get high on your own supply.

One morning in April this year, designers, programmers and tech entrepreneurs from across the world gathered at a conference centre on the shore of the San Francisco Bay. They had each paid up to $1,700 to learn how to manipulate people into habitual use of their products, on a course curated by conference organiser Nir Eyal.

Eyal, 39, the author of Hooked: How to Build Habit-Forming Products, has spent several years consulting for the tech industry, teaching techniques he developed by closely studying how the Silicon Valley giants operate.

“The technologies we use have turned into compulsions, if not full-fledged addictions,” Eyal writes. “It’s the impulse to check a message notification. It’s the pull to visit YouTube, Facebook, or Twitter for just a few minutes, only to find yourself still tapping and scrolling an hour later.” None of this is an accident, he writes. It is all “just as their designers intended”.

He explains the subtle psychological tricks that can be used to make people develop habits, such as varying the rewards people receive to create “a craving”, or exploiting negative emotions that can act as “triggers”. “Feelings of boredom, loneliness, frustration, confusion and indecisiveness often instigate a slight pain or irritation and prompt an almost instantaneous and often mindless action to quell the negative sensation,” Eyal writes.

Attendees of the 2017 Habit Summit might have been surprised when Eyal walked on stage to announce that this year’s keynote speech was about “something a little different”. He wanted to address the growing concern that technological manipulation was somehow harmful or immoral. He told his audience that they should be careful not to abuse persuasive design, and wary of crossing a line into coercion.

But he was defensive of the techniques he teaches, and dismissive of those who compare tech addiction to drugs. “We’re not freebasing Facebook and injecting Instagram here,” he said. He flashed up a slide of a shelf filled with sugary baked goods. “Just as we shouldn’t blame the baker for making such delicious treats, we can’t blame tech makers for making their products so good we want to use them,” he said. “Of course that’s what tech companies will do. And frankly: do we want it any other way?”

Without irony, Eyal finished his talk with some personal tips for resisting the lure of technology. He told his audience he uses a Chrome extension, called DF YouTube, “which scrubs out a lot of those external triggers” he writes about in his book, and recommended an app called Pocket Points that “rewards you for staying off your phone when you need to focus”.

Finally, Eyal confided the lengths he goes to protect his own family. He has installed in his house an outlet timer connected to a router that cuts off access to the internet at a set time every day. “The idea is to remember that we are not powerless,” he said. “We are in control.”

But are we? If the people who built these technologies are taking such radical steps to wean themselves free, can the rest of us reasonably be expected to exercise our free will?

Not according to Tristan Harris, a 33-year-old former Google employee turned vocal critic of the tech industry. “All of us are jacked into this system,” he says. “All of our minds can be hijacked. Our choices are not as free as we think they are.”

Harris, who has been branded “the closest thing Silicon Valley has to a conscience”, insists that billions of people have little choice over whether they use these now ubiquitous technologies, and are largely unaware of the invisible ways in which a small number of people in Silicon Valley are shaping their lives.

A graduate of Stanford University, Harris studied under BJ Fogg, a behavioural psychologist revered in tech circles for mastering the ways technological design can be used to persuade people. Many of his students, including Eyal, have gone on to prosperous careers in Silicon Valley.

Harris is the student who went rogue; a whistleblower of sorts, he is lifting the curtain on the vast powers accumulated by technology companies and the ways they are using that influence. “A handful of people, working at a handful of technology companies, through their choices will steer what a billion people are thinking today,” he said at a recent TED talk in Vancouver.

“I don’t know a more urgent problem than this,” Harris says. “It’s changing our democracy, and it’s changing our ability to have the conversations and relationships that we want with each other.” Harris went public – giving talks, writing papers, meeting lawmakers and campaigning for reform – after three years struggling to effect change inside Google’s Mountain View headquarters.

It all began in 2013, when he was working as a product manager at Google, and circulated a thought-provoking memo, A Call To Minimise Distraction & Respect Users’ Attention, to 10 close colleagues. It struck a chord, spreading to some 5,000 Google employees, including senior executives who rewarded Harris with an impressive-sounding new job: he was to be Google’s in-house design ethicist and product philosopher.

Looking back, Harris sees that he was promoted into a marginal role. “I didn’t have a social support structure at all,” he says. Still, he adds: “I got to sit in a corner and think and read and understand.”

He explored how LinkedIn exploits a need for social reciprocity to widen its network; how YouTube and Netflix autoplay videos and next episodes, depriving users of a choice about whether or not they want to keep watching; how Snapchat created its addictive Snapstreaks feature, encouraging near-constant communication between its mostly teenage users.

The techniques these companies use are not always generic: they can be algorithmically tailored to each person. An internal Facebook report leaked this year, for example, revealed that the company can identify when teens feel “insecure”, “worthless” and “need a confidence boost”. Such granular information, Harris adds, is “a perfect model of what buttons you can push in a particular person”.

Tech companies can exploit such vulnerabilities to keep people hooked; manipulating, for example, when people receive “likes” for their posts, ensuring they arrive when an individual is likely to feel vulnerable, or in need of approval, or maybe just bored. And the very same techniques can be sold to the highest bidder. “There’s no ethics,” he says. A company paying Facebook to use its levers of persuasion could be a car business targeting tailored advertisements to different types of users who want a new vehicle. Or it could be a Moscow-based troll farm seeking to turn voters in a swing county in Wisconsin.

Harris believes that tech companies never deliberately set out to make their products addictive. They were responding to the incentives of an advertising economy, experimenting with techniques that might capture people’s attention, even stumbling across highly effective design by accident.

A friend at Facebook told Harris that designers initially decided the notification icon, which alerts people to new activity such as “friend requests” or “likes”, should be blue. It fit Facebook’s style and, the thinking went, would appear “subtle and innocuous”. “But no one used it,” Harris says. “Then they switched it to red and of course everyone used it.”

That red icon is now everywhere. When smartphone users glance at their phones, dozens or hundreds of times a day, they are confronted with small red dots beside their apps, pleading to be tapped. “Red is a trigger colour,” Harris says. “That’s why it is used as an alarm signal.”

The most seductive design, Harris explains, exploits the same psychological susceptibility that makes gambling so compulsive: variable rewards. When we tap those apps with red icons, we don’t know whether we’ll discover an interesting email, an avalanche of “likes”, or nothing at all. It is the possibility of disappointment that makes it so compulsive.

It’s this that explains how the pull-to-refresh mechanism, whereby users swipe down, pause and wait to see what content appears, rapidly became one of the most addictive and ubiquitous design features in modern technology. “Each time you’re swiping down, it’s like a slot machine,” Harris says. “You don’t know what’s coming next. Sometimes it’s a beautiful photo. Sometimes it’s just an ad.”
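To make the mechanic concrete, here is a minimal, purely illustrative Python sketch of the variable-reward schedule Harris describes. The outcomes and their probabilities are invented for the example and not drawn from any real app; the point is only that an unpredictable payoff per pull is what gives the gesture its slot-machine quality.

```python
import random

# Invented outcomes and probabilities, for illustration only: most pulls
# pay off with little or nothing, and the occasional jackpot is what
# makes the gesture compulsive, as with a slot machine.
OUTCOMES = [
    ("nothing new", 0.5),
    ("just an ad", 0.2),
    ("a few new likes", 0.2),
    ("a beautiful photo", 0.1),
]

def pull_to_refresh() -> str:
    """Draw one unpredictable outcome, like pulling a slot-machine lever."""
    roll, cumulative = random.random(), 0.0
    for outcome, probability in OUTCOMES:
        cumulative += probability
        if roll < cumulative:
            return outcome
    return OUTCOMES[-1][0]  # guard against floating-point rounding

for _ in range(5):
    print("You swipe down and get:", pull_to_refresh())
```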

The designer who created the pull-to-refresh mechanism, first used to update Twitter feeds, is Loren Brichter, widely admired in the app-building community for his sleek and intuitive designs.

Now 32, Brichter says he never intended the design to be addictive – but would not dispute the slot machine comparison. “I agree 100%,” he says. “I have two kids now and I regret every minute that I’m not paying attention to them because my smartphone has sucked me in.”

Brichter created the feature in 2009 for Tweetie, his startup, mainly because he could not find anywhere to fit the “refresh” button on his app. Holding and dragging down the feed to update seemed at the time nothing more than a “cute and clever” fix. Twitter acquired Tweetie the following year, integrating pull-to-refresh into its own app.

Since then the design has become one of the most widely emulated features in apps; the downward-pull action is, for hundreds of millions of people, as intuitive as scratching an itch.

Brichter says he is puzzled by the longevity of the feature. In an era of push notification technology, apps can automatically update content without being nudged by the user. “It could easily retire,” he says. Instead it appears to serve a psychological function: after all, slot machines would be far less addictive if gamblers didn’t get to pull the lever themselves. Brichter prefers another comparison: that it is like the redundant “close door” button in some elevators with automatically closing doors. “People just like to push it.”

All of which has left Brichter, who has put his design work on the backburner while he focuses on building a house in New Jersey, questioning his legacy. “I’ve spent many hours and weeks and months and years thinking about whether anything I’ve done has made a net positive impact on society or humanity at all,” he says. He has blocked certain websites, turned off push notifications, restricted his use of the Telegram app to message only with his wife and two close friends, and tried to wean himself off Twitter. “I still waste time on it,” he confesses, “just reading stupid news I already know about.” He charges his phone in the kitchen, plugging it in at 7pm and not touching it until the next morning.

“Smartphones are useful tools,” he says. “But they’re addictive. Pull-to-refresh is addictive. Twitter is addictive. These are not good things. When I was working on them, it was not something I was mature enough to think about. I’m not saying I’m mature now, but I’m a little bit more mature, and I regret the downsides.”

Not everyone in his field appears racked with guilt. The two inventors listed on Apple’s patent for “managing notification connections and displaying icon badges” are Justin Santamaria and Chris Marcellino. Both were in their early 20s when they were hired by Apple to work on the iPhone. As engineers, they worked on the behind-the-scenes plumbing for push-notification technology, introduced in 2009 to enable real-time alerts and updates to hundreds of thousands of third-party app developers. It was a revolutionary change, providing the infrastructure for so many experiences that now form a part of people’s daily lives, from ordering an Uber to making a Skype call to receiving breaking news updates.

But notification technology also enabled a hundred unsolicited interruptions into millions of lives, accelerating the arms race for people’s attention. Santamaria, 36, who now runs a startup after a stint as the head of mobile at Airbnb, says the technology he developed at Apple was not “inherently good or bad”. “This is a larger discussion for society,” he says. “Is it OK to shut off my phone when I leave work? Is it OK if I don’t get right back to you? Is it OK that I’m not ‘liking’ everything that goes through my Instagram screen?”

His then colleague, Marcellino, agrees. “Honestly, at no point was I sitting there thinking: let’s hook people,” he says. “It was all about the positives: these apps connect people, they have all these uses – ESPN telling you the game has ended, or WhatsApp giving you a message for free from your family member in Iran who doesn’t have a message plan.”

A few years ago Marcellino, 33, left the Bay Area, and is now in the final stages of retraining to be a neurosurgeon. He stresses he is no expert on addiction, but says he has picked up enough in his medical training to know that technologies can affect the same neurological pathways as gambling and drug use. “These are the same circuits that make people seek out food, comfort, heat, sex,” he says.

All of it, he says, is reward-based behaviour that activates the brain’s dopamine pathways. He sometimes finds himself clicking on the red icons beside his apps “to make them go away”, but is conflicted about the ethics of exploiting people’s psychological vulnerabilities. “It is not inherently evil to bring people back to your product,” he says. “It’s capitalism.”

That, perhaps, is the problem. Roger McNamee, a venture capitalist who benefited from hugely profitable investments in Google and Facebook, has grown disenchanted with both companies, arguing that their early missions have been distorted by the fortunes they have been able to earn through advertising.

He identifies the advent of the smartphone as a turning point, raising the stakes in an arms race for people’s attention. “Facebook and Google assert with merit that they are giving users what they want,” McNamee says. “The same can be said about tobacco companies and drug dealers.”

That would be a remarkable assertion for any early investor in Silicon Valley’s most profitable behemoths. But McNamee, 61, is more than an arms-length money man. Once an adviser to Mark Zuckerberg, 10 years ago McNamee introduced the Facebook CEO to his friend, Sheryl Sandberg, then a Google executive who had overseen the company’s advertising efforts. Sandberg, of course, became chief operating officer at Facebook, transforming the social network into another advertising heavyweight.

McNamee chooses his words carefully. “The people who run Facebook and Google are good people, whose well-intentioned strategies have led to horrific unintended consequences,” he says. “The problem is that there is nothing the companies can do to address the harm unless they abandon their current advertising models.”

But how can Google and Facebook be forced to abandon the business models that have transformed them into two of the most profitable companies on the planet?

McNamee believes the companies he invested in should be subjected to greater regulation, including new anti-monopoly rules. In Washington, there is growing appetite, on both sides of the political divide, to rein in Silicon Valley. But McNamee worries the behemoths he helped build may already be too big to curtail. “The EU recently penalised Google $2.42bn for anti-monopoly violations, and Google’s shareholders just shrugged,” he says.

Rosenstein, the Facebook “like” co-creator, believes there may be a case for state regulation of “psychologically manipulative advertising”, saying the moral impetus is comparable to taking action against fossil fuel or tobacco companies. “If we only care about profit maximisation,” he says, “we will go rapidly into dystopia.”

James Williams does not believe talk of dystopia is far-fetched. The ex-Google strategist who built the metrics system for the company’s global search advertising business, he has had a front-row view of an industry he describes as the “largest, most standardised and most centralised form of attentional control in human history”.

Williams, 35, left Google last year, and is on the cusp of completing a PhD at Oxford University exploring the ethics of persuasive design. It is a journey that has led him to question whether democracy can survive the new technological age.

He says his epiphany came a few years ago, when he noticed he was surrounded by technology that was inhibiting him from concentrating on the things he wanted to focus on. “It was that kind of individual, existential realisation: what’s going on?” he says. “Isn’t technology supposed to be doing the complete opposite of this?”

That discomfort was compounded during a moment at work, when he glanced at one of Google’s dashboards, a multicoloured display showing how much of people’s attention the company had commandeered for advertisers. “I realised: this is literally a million people that we’ve sort of nudged or persuaded to do this thing that they weren’t going to otherwise do,” he recalls.

He embarked on several years of independent research, much of it conducted while working part-time at Google. About 18 months in, he saw the Google memo circulated by Harris and the pair became allies, struggling to bring about change from within.

Williams and Harris left Google around the same time, and co-founded an advocacy group, Time Well Spent, that seeks to build public momentum for a change in the way big tech companies think about design. Williams finds it hard to comprehend why this issue is not “on the front page of every newspaper every day.

“Eighty-seven percent of people wake up and go to sleep with their smartphones,” he says. The entire world now has a new prism through which to understand politics, and Williams worries the consequences are profound.

The same forces that led tech firms to hook users with design tricks, he says, also encourage those companies to depict the world in a way that makes for compulsive, irresistible viewing. “The attention economy incentivises the design of technologies that grab our attention,” he says. “In so doing, it privileges our impulses over our intentions.”

That means privileging what is sensational over what is nuanced, appealing to emotion, anger and outrage. The news media is increasingly working in service to tech companies, Williams adds, and must play by the rules of the attention economy to “sensationalise, bait and entertain in order to survive”.

In the wake of Donald Trump’s stunning electoral victory, many were quick to question the role of so-called “fake news” on Facebook, Russian-created Twitter bots or the data-centric targeting efforts that companies such as Cambridge Analytica used to sway voters. But Williams sees those factors as symptoms of a deeper problem.

It is not just shady or bad actors who were exploiting the internet to change public opinion. The attention economy itself is set up to promote a phenomenon like Trump, who is masterly at grabbing and retaining the attention of supporters and critics alike, often by exploiting or creating outrage.

Williams was making this case before the president was elected. In a blog published a month before the US election, Williams sounded the alarm bell on an issue he argued was a “far more consequential question” than whether Trump reached the White House. The reality TV star’s campaign, he said, had heralded a watershed in which “the new, digitally supercharged dynamics of the attention economy have finally crossed a threshold and become manifest in the political realm”.

Williams saw a similar dynamic unfold months earlier, during the Brexit campaign, when the attention economy appeared to him biased in favour of the emotional, identity-based case for the UK leaving the European Union. He stresses these dynamics are by no means isolated to the political right: they also play a role, he believes, in the unexpected popularity of leftwing politicians such as Bernie Sanders and Jeremy Corbyn, and the frequent outbreaks of internet outrage over issues that ignite fury among progressives.

All of which, Williams says, is not only distorting the way we view politics but, over time, may be changing the way we think, making us less rational and more impulsive. “We’ve habituated ourselves into a perpetual cognitive style of outrage, by internalising the dynamics of the medium,” he says.

It is against this political backdrop that Williams argues the fixation in recent years with the surveillance state fictionalised by George Orwell may have been misplaced. It was another English science fiction writer, Aldous Huxley, who provided the more prescient observation when he warned that Orwellian-style coercion was less of a threat to democracy than the more subtle power of psychological manipulation, and “man’s almost infinite appetite for distractions”.

Since the US election, Williams has explored another dimension to today’s brave new world. If the attention economy erodes our ability to remember, to reason, to make decisions for ourselves – faculties that are essential to self-governance – what hope is there for democracy itself?

“The dynamics of the attention economy are structurally set up to undermine the human will,” he says. “If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.” If Apple, Facebook, Google, Twitter, Instagram and Snapchat are gradually chipping away at our ability to control our own minds, could there come a point, I ask, at which democracy no longer functions?

“Will we be able to recognise it, if and when it happens?” Williams replies. “And if we can’t, then how do we know it hasn’t happened already?”

Image: Facebook’s headquarters in Menlo Park, California. The company’s famous ‘likes’ feature has been described by its creator as ‘bright dings of pseudo-pleasure’. Photograph: Bloomberg/Bloomberg via Getty Images

 

Wary of robots taking jobs, Hawaii toys with guaranteed pay

CBS News

HONOLULU — Driverless trucks. Factory robots. Delivery drones. Virtual personal assistants.

As technological innovations increasingly edge into the workplace, many people fear that robots and machines are destined to take jobs that human beings have held for decades — a trend that is already happening in stores and factories around the country. For many affected workers, retraining might be out of reach — unavailable, unaffordable or inadequate.

What then?

Enter the idea of a universal basic income, the notion that everyone should be able to receive a stream of income to live on, regardless of their employment or economic status.

It isn’t an idea that seems likely to gain traction nationally in the current political environment. But in some politically progressive corners of the country, including Hawaii and the San Francisco Bay area, the idea of distributing a guaranteed income has begun to gain support.

Over the past two decades, automation has reduced the need for workers, especially in such blue-collar sectors as manufacturing, warehousing and mining. Many of the jobs that remain demand higher education or advanced technological skills. It helps explain why just 55 percent of Americans with no more than a high school diploma are employed, down from 60 percent just before the Great Recession.

Hawaii state lawmakers have voted to explore the idea of a universal basic income in light of research suggesting that a majority of waiter, cook and building cleaning jobs — vital to Hawaii’s tourism-dependent economy — will eventually be replaced by machines. The crucial question of who would pay for the program has yet to be determined. But support for the idea has taken root.

“Our economy is changing far more rapidly than anybody’s expected,” said state Rep. Chris Lee, who introduced legislation to consider a guaranteed universal income.

Lee said he felt it’s important “to be sure that everybody will benefit from the technological revolution that we’re seeing to make sure no one’s left behind.”

Here are some questions and answers:

What is a universal basic income?

In a state or nation with universal basic income, every adult would receive a uniform fixed amount that would be deemed enough to meet basic needs. The idea gained some currency in the 1960s and 1970s, with proponents ranging from Martin Luther King Jr. to President Richard Nixon, who proposed a “negative income tax” similar to basic income. It failed to pass Congress.

Recently, some technology leaders have been breathing new life — and money — into the idea. Mark Zuckerberg, Elon Musk and others have promoted the idea as a way to address the potential loss of many transportation, manufacturing, retail and customer service jobs to automation and artificial intelligence.

Even some economists who welcome technological change to make workplaces more efficient note that the pace of innovation in coming years is likely to accelerate. Community colleges and retraining centers could find it difficult to keep up. Supporters of a universal basic income say the money would cushion the economic pain for the affected workers.

Where would the money come from?

In the long run, that would likely be decided by political leaders. For now, philanthropic organizations founded by technology entrepreneurs have begun putting money into pilot programs to provide basic income. The Economic Security Project, co-led by Facebook co-founder Chris Hughes and others, committed $10 million over two years to basic income projects.

A trial program in Kenya, led by the U.S. group GiveDirectly, is funded mainly by Google; the Omidyar Network, started by eBay founder Pierre Omidyar; and Good Ventures, co-led by Facebook co-founder Dustin Moskovitz.

Providing a basic income in expensive countries like the United States would, of course, be far costlier.

Tom Yamachika, president of the Tax Foundation of Hawaii, a nonprofit dedicated to limited taxes and fairness, has estimated that if all Hawaii residents were given $10,000 annually, it would cost about $10 billion a year, which he says Hawaii can’t afford given its $20 billion in unfunded pension liabilities.
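The arithmetic behind such an estimate is simple enough to check. The sketch below is illustrative only, and the recipient count is the key assumption: about one million recipients reproduces a figure of roughly $10 billion, while paying all of Hawaii's roughly 1.4 million residents would cost proportionally more.

```python
# Back-of-the-envelope check on the cost of a $10,000-a-year basic income.
# The recipient count is an assumption made for illustration; Hawaii's
# total population is roughly 1.4 million.

ANNUAL_PAYMENT = 10_000  # dollars per person per year

def annual_cost_billions(recipients: int) -> float:
    """Total program cost per year, in billions of dollars."""
    return recipients * ANNUAL_PAYMENT / 1e9

print(annual_cost_billions(1_000_000))  # 10.0 -> roughly the quoted estimate
print(annual_cost_billions(1_400_000))  # 14.0 -> if every resident were paid
```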

“Basic income is such a broad subject, it could encompass hundreds of different kinds of mechanisms to help families,” Lee said. “You don’t have to enact the entire thing in one massive program. You can take bits and pieces that make sense.”

Karl Widerquist, co-founder of the U.S. Basic Income Guarantee Network, an informal group that promotes the idea of a basic income, suggests that Hawaii could collect a property tax from hotels, businesses and residents that could be redistributed to residents.

“If people in Alaska deserve an oil dividend, why don’t the people of Hawaii deserve a beach dividend?” he asked.

Other proponents suggest replacing part of the nation’s web of social support programs with a universal basic income. In places like Finland, this possibility has drawn opposition from the country’s powerful trade unions.

Some, like Natalie Foster, co-chairwoman of the Economic Security Project, say they think that if universal income took off in the U.S., it would begin incrementally — perhaps by taxing carbon emissions and distributing the money as basic income, an idea explored in California and Washington D.C.

A study by the Roosevelt Institute, a left-leaning think tank, found that distributing a universal income by increasing the federal debt would expand the economy because of the stimulating effects of the additional cash.

Where does universal basic income exist now?

Not on a large scale in the United States. But the idea is being pursued in small trials overseas. The program that New York-based GiveDirectly has established in Kenya is distributing $22 a month to residents of a village for the next 12 years — roughly what residents need to buy essentials.

The group says one goal is to assess whether people will change their behavior if they know they will enjoy a guaranteed income for an extended time. GiveDirectly is distributing money to 100 people and plans to expand to 26,000 recipients once the group reaches its $30 million funding goal, said Paul Niehaus, a co-founder.

“We had someone say, ‘I used to work this job in Nairobi as a security guard because it was the only way I could pay for my kids’ education, but now that I have this basic income I can afford to move back and actually live with my family again,’ ” he said.

In Oakland, California, Y Combinator, a startup incubator, is giving about $1,500 a month to a handful of people selected randomly and will soon expand distribution to 100 recipients. It eventually plans to provide $1,000 monthly to 1,000 people and study how recipients spend their time and how their financial health and well-being are affected.

Finland is distributing money to 2,000 randomly selected people. It hopes to learn how it might adapt its social security system to a changing workplace, incentivize people to work and simplify the bureaucracy of benefits. Canada’s province of Ontario earlier this year launched a project to study the effects of universal basic income in three cities.

In India, which is also considering distributing a universal basic income, the transportation minister has said the country would ban driverless cars because they would imperil people’s jobs.

What about in the United States?

Republican-leaning Alaska has long distributed revenue from oil extraction to its residents in payments ranging from about $1,000 to $2,000 annually.

A study commissioned by the Economic Security Project found that 72 percent of Alaskans saved the money for essentials, emergencies, debt payments, retirement or education. Just 1 percent said that receiving the oil dividend had made them likely to work less.

“People are very supportive of the dividend,” Foster said. “They don’t see it as a handout; they see it as their right as an Alaskan to receive the income from the oil royalties.”

In Hawaii, a group of politicians, economists, social services providers, business and union representatives will meet in the fall to begin gathering data. They’ll examine Hawaii’s economy and its exposure to disruption and automation and how those trends could affect social safety nets, Lee said. After that, they’ll explore whether it makes sense to offer full or partial universal income.

“It could very well mean that it would be significantly cheaper to look at other options rather than let our existing services be overwhelmed by a changing economy,” Lee said.

What do critics say?

Aside from the cost, some detractors say they fear that distributing free money could diminish some people’s work ethic and productivity.

In Hawaii, which has one of the nation’s highest homelessness rates, some worry that a basic income would attract unemployed people to move to the islands.

“A lot of poor people move here anyway, because they don’t freeze,” Yamachika said. “This won’t help.”

Secretive Apple Tries to Open Up on Artificial Intelligence

Tripp Mickle
The Wall Street Journal

The battle for artificial-intelligence expertise is forcing Apple Inc. to grapple with its famous penchant for secrecy, as tech companies seek to woo talent in a discipline known for its openness.

The technology giant this year has been trying to draw attention—but only so much—to its efforts to develop artificial intelligence, or AI, a term that generally describes software that enables computers to learn and improve functions on their own.

Apple launched a public blog in July to talk about its work, for example, and has allowed its researchers to speak at several conferences on artificial intelligence, including a TED Talk in April by Tom Gruber, co-creator of Apple’s Siri voice assistant, that was posted on YouTube last month.

Continue reading

Just smile: In KFC China store, diners have new way to pay

Reuters

SHANGHAI (Reuters) – Diners at a KFC store in the eastern Chinese city of Hangzhou will have a new way to pay for their meal. Just smile.

Customers will be able to use a “Smile to Pay” facial recognition system at the tech-heavy, health-focused concept store, part of a drive by Yum China Holdings Inc to lure a younger generation of consumers.

Continue reading

DOWN THE ’TUBE: YouTube accused of CENSORSHIP over controversial new bid to ‘limit’ access to videos

Jasper Hamill
UK Sun

YOUTUBE has been accused of censorship after introducing a controversial new policy designed to reduce the audience for videos deemed to be “inappropriate or offensive to some audiences”.

The Google-owned video site is now putting videos into a “limited state” if they are deemed controversial enough to be considered objectionable, but not hateful, pornographic or violent enough to be banned altogether.

This policy was announced several months ago but has come into force in the past week, prompting anger among members of the YouTube community.

The Sun Online understands Google and YouTube staff refer to the tactic as “tougher treatment”.

One prominent video-maker slammed the new scheme whilst WikiLeaks founder Julian Assange described the measures as “economic censorship”.

However, YouTube sees it as a way of maintaining freedom of speech and allowing discussion of controversial issues without resorting to the wholesale banning of videos.

Videos which are put into a limited state cannot be embedded on other websites.

They also cannot be easily published on social media using the usual share buttons, and other users cannot comment on them.

Crucially, the person who made the video will no longer receive any payment.

Earlier this week, Julian Assange wrote: “‘Controversial’ but contract-legal videos [which do not break YouTube’s terms and conditions] cannot be liked, embedded or earn [money from advertising revenue].

“What’s interesting about the new method deployed is that it is a clear attempt at social engineering. It isn’t just turning off the ads.

“It’s turning off the comments, embeds, etc too.

“Everything possible to strangle the reach without deleting it.”

Criticism of YouTube‘s policies is most acute among people on the right of the political spectrum, who fear that Silicon Valley is dominated by the left and determined to silence opposing voices – a claim denied by tech giants like Facebook and Google.

The new YouTube rules were highlighted this week by Paul Joseph Watson, a globally famous British right-wing YouTuber and editor-at-large of Infowars, who spoke out after a guest on his online show had one of her videos removed following the appearance.

The black female YouTuber, who uses the name RedPillBlack, made a video entitled “WTF? Black Lives Matter Has A List of Demands for White People!” in response to a BLM member’s call for white people to “give up the home you own to a black or brown family”.

The video was part of a series whose name features an offensive racial term, which we have decided not to publish, and criticises the BLM member’s statement point by point.

We watched her video and whilst it’s clear that many people might disagree with the political point she is making, the actual video did not appear to be offensive or gratuitous.

“Some people might watch the video and think I’m speaking out against black people,” she said in the video.

“But what I’m doing here is speaking up for black people.”

The video was allegedly banned but later reinstated following a series of tweets from Watson.

Read More

Putin: Leader in artificial intelligence will rule world

AP via Houston Chronicle

MOSCOW (AP) — Russian President Vladimir Putin says that whoever reaches a breakthrough in developing artificial intelligence will come to dominate the world.

Putin, speaking Friday at a meeting with students, said the development of AI raises “colossal opportunities and threats that are difficult to predict now.”

He warned that “the one who becomes the leader in this sphere will be the ruler of the world.”

Putin warned that “it would be strongly undesirable if someone wins a monopolist position” and promised that Russia would be ready to share its know-how in artificial intelligence with other nations.

The Russian leader predicted that future wars will be fought by drones, and “when one party’s drones are destroyed by drones of another, it will have no other choice but to surrender.”

Netflix’s ‘Wormwood’ Spotlights CIA’s Secret LSD Mind Control Experiments

Victoria Kim
The Fix

The upcoming Netflix docudrama dives deep into the conspiracy theory about the CIA’s attempt to develop tools for mind control.

The CIA’s mind control experiments from the 1950s and 1960s—known as MK-ULTRA—are the subject of a new Netflix series that revisits the epic conspiracy theory.

Wormwood is part documentary, part drama. Academy Award-winning director Errol Morris weaves dramatic reenactments together with real-life interviews. One person of particular interest is Eric Olson, the son of Dr. Frank Olson, the CIA biochemist who died after falling 10 stories from a New York City hotel room in 1953. Though his death was ruled a suicide, his family and others believe that he was assassinated by the CIA.

It’s no longer a secret that the agency oversaw hundreds of mind control experiments during the height of the Cold War—fueled by fears that Soviet, Chinese and North Korean agents were brainwashing American prisoners of war.

Continue reading