---------------------------------------------------------
Renewable Revolution
HTML https://renewablerevolution.createaforum.com
---------------------------------------------------------
*****************************************************
Return to: New Inventions
*****************************************************
#Post#: 14110--------------------------------------------------
AI and its Implications
By: Surly1 Date: October 24, 2019, 5:52 am
---------------------------------------------------------
Google Is Coming for Your Face
HTML https://www.thenation.com/article/immigrant-dna-data/
Personal data is routinely harvested from the most vulnerable
populations, without transparency, regulation, or principles—and
this should concern us all.
By Malka Older
[img width=800]https://www.thenation.com/wp-content/uploads/2019/10/google-facial-recognition-ap-img.jpg?scale=896&compress=80[/img]
Last week, The New York Times reported (https://www.nytimes.com/2019/10/02/us/dna-testing-immigrants.html?smid=nytcore-ios-share) on the federal government's plans to collect DNA samples from people in immigration custody, including asylum seekers. This is an infringement of civil rights and privacy, and opens the door to further misuse of data in the long term. There is no reason for people in custody to consent to this collection of personal data. Nor is there any clarity on the limits on how this data may be used in the future. The DNA samples will go into the FBI's criminal database, even though requesting asylum is not a crime and entering the country illegally is only a misdemeanor. That makes the practice not only an invasion of privacy in the present but also potentially a way to skew statistics and arguments in debates over immigration in the future.

The collection of immigrant DNA is not an isolated policy. All around the world, personal data is harvested from the most vulnerable populations, without transparency, regulation, or principles. It's a pattern we should all be concerned about, because it continues right up to the user agreements we click on again and again.

In February, the World Food Program (WFP) announced (https://www.wfp.org/news/palantir-and-wfp-partner-help-transform-global-humanitarian-delivery) a five-year partnership with the data analytics company Palantir Technologies. While the WFP claimed that this partnership would help make emergency assistance to refugees and other food-insecure populations more efficient, it was broadly criticized within the international aid community for potential infringement of privacy. A group of researchers and data-focused organizations, including the Engine Room, the AI Now Institute, and DataKind, sent an open letter (https://responsibledata.io/2019/02/08/open-letter-to-wfp-re-palantir-agreement/) to the WFP, expressing their concerns over the lack of transparency in the agreement and the potential for de-anonymization, bias, violation of rights, and undermining of humanitarian principles, among other issues.

Many humanitarian agencies are struggling with how to integrate modern data collection and analysis into their work. Improvements in data technology offer the potential to improve processes and ease the challenges of working in chaotic, largely informal environments (as well as appealing to donors), but they also raise risks in terms of privacy, exposure, and the necessity of partnering with private-sector companies that may wish to profit from access to that data.
[img width=800]https://www.thenation.com/wp-content/uploads/2019/10/facial-recognition-makeup-ap-img.jpg[/img]
A man has his face painted to represent efforts to defeat facial recognition during a 2018 protest at Amazon's headquarters over the company's contracts with Palantir. (AP Photo / Elaine Thompson)
In August, for example, the United Nations High Commissioner for Refugees trumpeted (https://www.unhcr.org/en-us/news/briefing/2019/8/5d4d24cf4/half-million-rohingya-refugees-receive-identity-documents-first-time.html) its achievement in providing biometric identity cards to Rohingya refugees from Myanmar in Bangladesh. What wasn't celebrated was the fact that refugees protested (https://www.refworld.org/docid/5c2cc3b011.html) the cards both because of the way their identities were defined—the cards did not allow the option of identifying as Rohingya, calling them only "Myanmar nationals" (https://www.yahoo.com/news/race-row-hampers-rohingya-registration-bangladesh-103106620.html)—and out of concern that the biometric data might be shared with Myanmar on repatriation, raising echoes of the role ethnically marked identity cards played in the Rwandan genocide (http://www.genocidewatch.org/images/AboutGen_Group_Classification_on_National_ID_Cards.pdf), among others. Writing about the Rohingya biometrics collection in the journal Social Media + Society, Mirca Madianou describes (https://journals.sagepub.com/doi/full/10.1177/2056305119863146) these initiatives as a kind of "techno-colonialism" in which "digital innovation and data practices reproduce the power asymmetries of humanitarianism, and…become constitutive of humanitarian crises themselves."

Unprincipled data collection is not limited to refugee populations. The New York Daily News reported (https://www.nydailynews.com/news/national/ny-google-darker-skin-tones-facial-recognition-pixel-20191002-5vxpgowknffnvbmy5eg7epsf34-story.html) on Wednesday that Google has been using temporary employees, paid through a third party, to collect facial scans of dark-skinned people in an attempt to better balance its facial recognition database. According to the article, temporary workers were told "to go after people of color, conceal the fact that people's faces were being recorded and even lie to maximize their data collections." Target populations included homeless people and students. They were offered a five-dollar gift card (which is more than refugees and immigrant detainees get for their data) but, critically, were never informed about how the facial scans would be used, stored, or, apparently, collected.

A Google spokesperson told the Daily News that the data was being collected "to build fairness into Pixel 4's face unlock feature" in the interests of "building an inclusive product." Leaving aside whether contributing to the technology of a reportedly $900 (https://www.tomsguide.com/news/google-pixel-4) phone is worthwhile for a homeless person, the collection of this data without formal consent or legal agreements leaves it open to being used for any number of other purposes, such as the policing of the homeless people who contributed it.

For governments, coerced data collection represents a way of making these chaotic populations visible, and therefore, in theory, controllable. These are also groups with very little recourse for rejecting data collection, offering states the opportunity to test out technologies of the future, like biometric identity cards, that might eventually become nationwide initiatives. For the private firms inevitably involved in implementing the complexities of data collection and management, these groups represent untapped value to surveillance capitalism, a term coined by Shoshana Zuboff (https://en.wikipedia.org/wiki/Surveillance_capitalism) to refer to the way corporations extract profit from data analysis; for example, by tracking behavior on Facebook or in Google searches to present targeted advertisements. In general, refugees, asylum seekers, and homeless people give companies far less data than the rest of us, meaning that there is still information to extract from them, compile, and sell for profits that the contributors of the data will never see.
One concern with this kind of unethical data sourcing is that information collected for one stated goal may be used for another: in a recent New York Times Magazine article (https://www.nytimes.com/2019/10/02/magazine/ice-surveillance-deportation.html?smid=tw-nytimes&smtyp=cur), McKenzie Funk details how data analytics developed during the previous administration to triage targeting toward "felons, not families" are now being used to track all immigrants, regardless of criminal status. Another issue is how the data is stored and protected, and how it might be misused by other actors in the case of a breach. A major concern for the Rohingya refugees was what might happen to them if their biometric data fell into the hands of the very groups that attacked them for their identity.
Both of these concerns should sound familiar to all of us. It seems like we hear about new data breaches on a daily basis, offering up the medical records, Social Security numbers, and shopping history of millions of customers to hackers and scammers. But even without insecurities, our data is routinely vacuumed up through our cell phones, browsers, and interactions with state bureaucracy (e.g., driver's licenses)—and misused (https://irishtechnews.ie/5-ways-big-data-gets-misused/) in immoral, illegal, or dangerous ways. Facebook has been forced to admit (https://www.nytimes.com/2018/04/11/technology/facebook-privacy-hearings.html) again and again that it has been sharing the detailed information it gets from tracking its users with third parties, ranging from apps to advertisers to firms attempting to influence the political sphere, like Cambridge Analytica. Apple has been accused (https://www.cnet.com/news/apple-sued-by-itunes-customers-over-alleged-data-misuse/) of similar misuse.

Refugees or detained asylum seekers have less choice than most people to opt out of certain terms of service. But these coercive mechanisms affect us all. Getting a five-dollar gift card (not even cash!) may seem like a low price for which to sell a scan of your face, but it isn't so different from what happens when we willingly click "I Agree" on those terms-of-service boxes. Even if we're wary of the way our data is being used, it's getting harder and harder to avoid giving it out. As our digital identities become increasingly entangled with functions like credit reporting, paying bills, and buying insurance, avoiding the big tech companies becomes more and more difficult. But when we opt in, we do so on the company's terms—not our own. User agreements (https://onezero.medium.com/user-agreements-are-betraying-you-19db7135441f) and privacy policies (https://www.nytimes.com/interactive/2019/06/12/opinion/facebook-google-privacy-policies.html) are notoriously difficult for even experts to understand, and a new Pew Research study (https://www.pewinternet.org/2019/10/09/americans-and-digital-knowledge/) showed that most US citizens are short on digital knowledge and particularly lacking in understanding of privacy and cybersecurity.
Like the subjects of Google's unethical facial scans and the recipients of biometric identity cards in refugee camps, we have little control over how the data is used once we've given it up, and no meaningful metric for deciding when giving up our information becomes a worthwhile trade-off. We should be shocked by how companies and governments are abusing the data and privacy rights of the most vulnerable groups and individuals. But we should also recognize that it's not so different from the compromises we are all routinely asked to make ourselves.

Dr. Malka Older is an affiliated research fellow at the Centre for the Sociology of Organizations at Sciences Po and the author of an acclaimed trilogy of science-fiction political thrillers starting with Infomocracy. Her new collection, …and Other Disasters, comes out November 16.
#Post#: 14111--------------------------------------------------
Algorithms Are Designed to Addict Us, and the Consequences Go Beyond Wasted Time
By: Surly1 Date: October 24, 2019, 6:11 am
---------------------------------------------------------
Algorithms Are Designed to Addict Us, and the Consequences Go
Beyond Wasted Time.
HTML https://singularityhub.com/2019/10/17/youtubes-algorithm-wants-to-keep-you-watching-and-thats-a-problem/
[img width=600]https://singularityhub.com/wp-content/uploads/2019/10/1024px-YouTube_Ruby_Play_Button_2.svg_.png[/img]
Thomas Hornigold
Goethe's The Sorcerer's Apprentice is a classic example of many stories on a similar theme. The young apprentice enchants a broom to mop the floor, avoiding some work in the process. But the enchantment quickly spirals out of control: the broom, monomaniacally focused on its task but unconscious of the consequences, ends up flooding the room.

The classic fear surrounding hypothetical, superintelligent AI is that we might give it the wrong goal, or insufficient constraints. Even in the well-developed field of narrow AI, we see that machine learning algorithms are very capable of finding unexpected means and unintended ways to achieve their goals. For example, let loose in the structured environment of video games, where a simple function—points scored—is to be maximized, they often find new exploits (https://www.wired.com/story/when-bots-teach-themselves-to-cheat/) or cheats (https://www.theverge.com/tldr/2018/2/28/17062338/ai-agent-atari-q-bert-cracked-bug-cheat) to win without playing.
In some ways, YouTube's algorithm is an immensely complicated beast: it serves up billions of recommendations a day. But its goals, at least originally, were fairly simple: maximize the likelihood that the user will click on a video, and the length of time (https://techcrunch.com/2017/02/28/people-now-watch-1-billion-hours-of-youtube-per-day/) they spend on YouTube. It has been stunningly successful: 70 percent of time spent on YouTube is watching recommended videos, amounting to 700 million hours a day. Every day, humanity as a collective spends a thousand lifetimes watching YouTube's recommended videos.
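A quick back-of-the-envelope check of that "thousand lifetimes" figure, assuming an average lifespan of roughly 80 years (the lifespan is my assumption; the 700-million-hours number is from the article):

[code]
# Rough sanity check of the "thousand lifetimes a day" claim.
# Assumes an average lifespan of ~80 years; the 700 million hours/day
# figure comes from the article above.
hours_watched_per_day = 700_000_000
hours_per_lifetime = 80 * 365 * 24           # ~700,800 hours in 80 years
lifetimes_per_day = hours_watched_per_day / hours_per_lifetime
print(round(lifetimes_per_day))              # ~999, i.e. roughly a thousand
[/code]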
The design of this algorithm, of course, is driven by YouTube's parent company, Alphabet, maximizing its own goal: advertising revenue, and hence the profitability of the company. Practically everything else that happens is a side effect. The neural nets of YouTube's algorithm (https://towardsdatascience.com/how-youtube-recommends-videos-b6e003a5ab2f) form connections—statistical weightings that favor some pathways over others—based on the colossal amount of data that we all generate by using the site. It may seem an innocuous or even sensible way to determine what people want to see; but without oversight, the unintended consequences can be nasty.
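To make the single-metric problem concrete, here is a deliberately tiny sketch of how a watch-time-maximizing ranker behaves. It is not YouTube's actual system; the candidate videos, probabilities, and feature names are invented for illustration.

[code]
# Minimal sketch of a recommender that ranks candidates purely by
# predicted engagement (click probability x expected watch time).
# All numbers and titles below are hypothetical, not YouTube data.
from dataclasses import dataclass

@dataclass
class Candidate:
    title: str
    p_click: float        # model's predicted click probability
    exp_minutes: float    # model's predicted watch time if clicked

def score(c: Candidate) -> float:
    # The only objective: expected minutes of watch time.
    return c.p_click * c.exp_minutes

candidates = [
    Candidate("Ten-minute astronomy explainer", p_click=0.30, exp_minutes=8),
    Candidate("Flat-earth 'proof' compilation, part 1 of 15", p_click=0.25, exp_minutes=40),
]

for c in sorted(candidates, key=score, reverse=True):
    print(f"{score(c):5.1f}  {c.title}")
# The conspiracy playlist wins (10.0 vs 2.4) because nothing in the
# objective distinguishes "engaging" from "good for the viewer".
[/code]

The patterns the article describes next follow directly from a score like this being the only thing optimized.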
Guillaume Chaslot, a former engineer at YouTube, has helped to expose some of these (https://thenextweb.com/google/2019/06/14/youtube-recommendations-toxic-algorithm-google-ai/). Speaking to TheNextWeb, he pointed out, "The problem is that the AI isn't built to help you get what you want—it's built to get you addicted to YouTube. Recommendations were designed to waste your time."

More than this: they can waste your time in harmful ways. Inflammatory, conspiratorial content generates clicks and engagement. If a small subset of users watches hours upon hours of political or conspiracy-theory content, the pathways in the neural net that recommend this content are reinforced.

The result is that users can begin with innocuous searches for relatively mild content, and find themselves quickly dragged towards extremist (https://www.nytimes.com/interactive/2019/06/08/technology/youtube-radical.html) or conspiratorial material. A survey of 30 attendees at a Flat Earth conference (https://www.theguardian.com/science/2019/feb/17/study-blames-youtube-for-rise-in-number-of-flat-earthers) showed that all but one originally came upon the Flat Earth conspiracy via YouTube, with the lone dissenter exposed to the ideas by family members who were in turn converted by YouTube.

Many readers (and this writer) know the experience of being sucked into a "wormhole" of related videos and content when browsing social media. But these wormholes can be extremely dark. Recently, a "pedophile wormhole" (https://www.youtube.com/watch?v=O13G5A5w5P0) on YouTube was discovered (https://techcrunch.com/2019/02/18/youtube-under-fire-for-recommending-videos-of-kids-with-inappropriate-comments/), a recommendation network of videos of children which was frequented by those who wanted to exploit children. In TechCrunch's investigation, it took only a few recommendation clicks from a (somewhat raunchy) search for adults in bikinis to reach this exploitative content.

It's simple, really: as far as the algorithm, with its one objective, is concerned, a user who watches one factual and informative video about astronomy and then goes on with their day is less advantageous than a user who watches fifteen flat-earth conspiracy videos in a row.

In some ways, none of this is particularly new. The algorithm is learning to exploit familiar flaws in the human psyche to achieve its ends, just as other algorithms find flaws in the code of '80s Atari games to score their own points. Conspiratorial tabloid newspaper content is replaced with clickbait videos on similar themes. Our short attention spans are exploited by social media algorithms, rather than TV advertising. Filter bubbles of opinion that once consisted of hanging around with people you agreed with and reading newspapers that reflected your own opinion are now reinforced by algorithms.

Any platform that reaches the size of the social media giants is bound to be exploited by people with exploitative, destructive, or irresponsible aims. It is equally difficult to see how they can operate at this scale without relying heavily on algorithms; even content moderation, which is partially automated, can take a heavy toll on the human moderators required to filter the worst content imaginable (https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona). Yet directing how the human race spends a billion hours a day, often shaping people's beliefs in unexpected ways, is evidently a source of great power.

The answer given by social media companies tends to be the same: better AI (https://singularityhub.com/2019/06/30/can-ai-save-the-internet-from-fake-news/). These algorithms needn't be blunt instruments. Tweaks are possible. For example, an older version of YouTube's algorithm consistently recommended "stale" content, simply because this had the most viewing history to learn from. The developers fixed this by including the age of the video as a variable.
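A tweak like that can be as small as one extra term in the score. A hedged sketch of the idea, using an invented half-life rather than anything YouTube has published:

[code]
import math

# Hypothetical freshness adjustment: discount a video's engagement score
# by its age, so that well-watched but stale uploads stop dominating.
# The 30-day half-life is an invented illustration, not a known value.
def adjusted_score(base_score: float, age_days: float, half_life_days: float = 30.0) -> float:
    freshness = math.exp(-math.log(2) * age_days / half_life_days)
    return base_score * freshness

print(adjusted_score(10.0, age_days=0))     # 10.0   (brand new)
print(adjusted_score(10.0, age_days=30))    # 5.0    (one half-life old)
print(adjusted_score(10.0, age_days=365))   # ~0.002 (very stale)
[/code]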
Similarly, shifting the focus from click likelihood to time spent watching the video was intended to prevent low-quality videos with clickbait titles from being recommended and leaving users dissatisfied with the platform. Recent updates aim to prioritize news from reliable and authoritative sources (https://www.wired.com/story/youtube-debuts-plan-to-promote-fund-authoritative-news/), and to make the algorithm more transparent by explaining why recommendations were made (https://www.theverge.com/2019/6/26/18759840/youtube-recommendation-videos-homepage-changes-algorithm-harmful-content). Other potential tweaks could add more emphasis on whether users "like" videos, as an indication of quality. And YouTube videos about topics prone to conspiracy, such as global warming, now include links to factual sources of information.

The issue, however, is sure to arise when such tweaks conflict with the profitability of the company in a significant way. Take a recent change to the algorithm, aimed at reducing bias (https://www.technologyreview.com/s/614432/youtube-algorithm-gets-more-addictive/) in the recommendations based on the order in which videos are presented. Essentially, if you have to scroll down further before clicking on a particular video, YouTube adds more weight to that decision: the user is probably actively seeking out content that's more related to their target. A neat idea, and one that improves user engagement by 0.24 percent, translating to millions of dollars in revenue for YouTube.
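The mechanism being described is essentially position-bias correction, familiar from the learning-to-rank literature: a click on an item far down the list counts for more, because the user had to work to find it. A rough sketch of the idea, with an invented examination model rather than YouTube's actual weights:

[code]
# Toy position-bias correction (inverse-propensity weighting).
# The chance that a user even examines an item falls off with rank, so a
# click at rank 20 is up-weighted relative to a click at rank 1.
# The 1/(rank ** 0.7) examination model below is illustrative only.
def examination_probability(rank: int) -> float:
    return 1.0 / (rank ** 0.7)

def training_weight(rank: int) -> float:
    # Inverse propensity: rarely examined positions get heavier weight.
    return 1.0 / examination_probability(rank)

for rank in (1, 5, 20):
    print(rank, round(training_weight(rank), 2))
# 1 -> 1.0, 5 -> ~3.09, 20 -> ~8.14: the deep-scroll click dominates the update.
[/code]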
If addictive content and engagement wormholes are what's profitable, will the algorithm change the weight of its recommendations accordingly? What weights will be applied to ethics, morality, and unintended consequences when making these decisions?

Here is the fundamental tension involved when trying to deploy these large-scale algorithms responsibly. Tech companies can tweak their algorithms, and journalists can probe their behavior (https://singularityhub.com/2018/11/18/follow-the-data-investigative-journalism-in-the-age-of-algorithms/) and expose some of these unintended consequences. But just as algorithms need to become more complex and avoid prioritizing a single metric without considering the consequences, companies must do the same.
#Post#: 14114--------------------------------------------------
Re: AI and its Implications
By: AGelbert Date: October 24, 2019, 12:23 pm
---------------------------------------------------------
Yes, AI bots are programmed to appeal to and exploit the basest
instincts of humans in order to profit off of them, while
simultaneously preventing them from engaging in critical
thinking that would expose, precisely and in excruciating
detail, how the human targets of exploitation are being used and
abused. It is a demonically clever way to perpetuate the society-destroying, elite-enriching status quo. Chris Hedges calls it
"Electronic Hallucinations". He is right.
May God have mercy on us all and lead us away from those evil
bastards whose goal is to turn all of us into a herd of
unthinking animals, happily distracted by shiny objects, as we
follow the primrose path to perdition.
[center][img width=640]https://ofcommonsense.files.wordpress.com/2015/07/free-choice-not-consequences.jpg[/img][/center]
*****************************************************