<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>
	Comments on: I Always Feel Like Somebody&#8217;s Watching Me	</title>
	<atom:link href="https://cyber.jotwell.com/i-always-feel-like-somebodys-watching-me/feed/" rel="self" type="application/rss+xml" />
	<link>https://cyber.jotwell.com/i-always-feel-like-somebodys-watching-me/</link>
	<description>The Journal of Things We Like (Lots)</description>
	<lastBuildDate>Thu, 07 Nov 2013 19:51:53 +0000</lastBuildDate>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	
	<item>
		<title>
		By: Jason Treit		</title>
		<link>https://cyber.jotwell.com/i-always-feel-like-somebodys-watching-me/#comment-420</link>

		<dc:creator><![CDATA[Jason Treit]]></dc:creator>
		<pubDate>Thu, 20 May 2010 22:05:53 +0000</pubDate>
		<guid isPermaLink="false">https://cyberjotwell.dewjbxx2-liquidwebsites.com/?p=148#comment-420</guid>

					<description><![CDATA[Ryan,

I regret not at least skimming &quot;People Can Be So Fake&quot; before going after one of its conclusions. Reminder to self: RTFA. The literature in there suggesting &quot;our brains &#039;rarely make distinctions between speaking to a machine and speaking to a person&#039; at a visceral level&quot; indeed gives me pause, and reveals to me I&#039;ve wrongly extended a mental model from limited past observation to the future promises and dangers of anthropomorphic design.

Here&#039;s the thing, though: most of today&#039;s anthropomorphic designs are as well-intended as the pair of eyes over a privacy policy, and each turns quickly into a nag that hinders our awareness of choices and their meaning. Think of touchscreen mall directories. A 3D hostess talks you through your search, offers hints along the way, compliments your choices, and gestures at the mall map like an air hostess doing a seatbelt demonstration.

When I encounter one of these directories, it gives me visceral pause. I feel intruded upon. But this presence does not have the effect of bracing me for the intrusive, noisy, public, social nature of retail, nor does it rekindle my awareness of other presences in the mall besides those I can see. Maybe my sense of recoil could be taken as a subliminal placeholder for broader misgivings, but that&#039;s a stretch: what I am recoiling at is the machine. A machine that interrupts to make sure I understand what a map is instead of serving me a (better) map.

That lost distinction is what prompted snark in my first reply. It&#039;s not that ordinary people don&#039;t know what a contract is. It&#039;s that contracts are full of boring, evasive, meaningless words. Visceral human presences in the mediation of unclear contracts do not touch the problem of meaningless consent.

I&#039;m curious, Ryan, what you think of Google&#039;s unofficial position on &lt;a href=&quot;http://www.law.ed.ac.uk/ahrc/script-ed/vol7-1/lundblad.asp&quot; rel=&quot;nofollow&quot;&gt;opt-in dystopias&lt;/a&gt;. This passage speaks my worry: &quot;Once consumers are desensitised to opt-in requests and the sequence of interactions required to constitute opting-in, the actual scope can start growing without much awareness on the part of the user.&quot; Whether or not a pair of eyes brings out a lasting inhibitory response is trivial if all they do is invade our thinking space and raise the volume on a &quot;someone&#039;s watching&quot; sensation that may go dull in the presence of machines and animals for adaptive purposes we aren&#039;t even cognizant of.

And I guess by &quot;humane&quot; design I really mean transparent and unobtrusive. Much in the &lt;a href=&quot;http://worrydream.com/MagicInk/&quot; rel=&quot;nofollow&quot;&gt;&quot;Magic Ink&quot;&lt;/a&gt; vein. Perhaps I&#039;ll extend humane to allow cognitive interruptions that presage hidden risks, if the doses are measured right, like the ones the body might supply when we step through a door. But my design biases, which you anticipate and deal with in the paper, almost prevented me from learning something new. So thanks for the reply.]]></description>
			<content:encoded><![CDATA[<p>Ryan,</p>
<p>I regret not at least skimming &#8220;People Can Be So Fake&#8221; before going after one of its conclusions. Reminder to self: RTFA. The literature in there suggesting &#8220;our brains &#8216;rarely make distinctions between speaking to a machine and speaking to a person&#8217; at a visceral level&#8221; indeed gives me pause, and reveals to me I&#8217;ve wrongly extended a mental model from limited past observation to the future promises and dangers of anthropomorphic design.</p>
<p>Here&#8217;s the thing, though: most of today&#8217;s anthropomorphic designs are as well-intended as the pair of eyes over a privacy policy, and each turns quickly into a nag that hinders our awareness of choices and their meaning. Think of touchscreen mall directories. A 3D hostess talks you through your search, offers hints along the way, compliments your choices, and gestures at the mall map like an air hostess doing a seatbelt demonstration.</p>
<p>When I encounter one of these directories, it gives me visceral pause. I feel intruded upon. But this presence does not have the effect of bracing me for the intrusive, noisy, public, social nature of retail, nor does it rekindle my awareness of other presences in the mall besides those I can see. Maybe my sense of recoil could be taken as a subliminal placeholder for broader misgivings, but that&#8217;s a stretch: what I am recoiling at is the machine. A machine that interrupts to make sure I understand what a map is instead of serving me a (better) map.</p>
<p>That lost distinction is what prompted snark in my first reply. It&#8217;s not that ordinary people don&#8217;t know what a contract is. It&#8217;s that contracts are full of boring, evasive, meaningless words. Visceral human presences in the mediation of unclear contracts do not touch the problem of meaningless consent.</p>
<p>I&#8217;m curious, Ryan, what you think of Google&#8217;s unofficial position on <a href="http://www.law.ed.ac.uk/ahrc/script-ed/vol7-1/lundblad.asp" rel="nofollow">opt-in dystopias</a>. This passage speaks my worry: &#8220;Once consumers are desensitised to opt-in requests and the sequence of interactions required to constitute opting-in, the actual scope can start growing without much awareness on the part of the user.&#8221; Whether or not a pair of eyes brings out a lasting inhibitory response is trivial if all they do is invade our thinking space and raise the volume on a &#8220;someone&#8217;s watching&#8221; sensation that may go dull in the presence of machines and animals for adaptive purposes we aren&#8217;t even cognizant of.</p>
<p>And I guess by &#8220;humane&#8221; design I really mean transparent and unobtrusive. Much in the <a href="http://worrydream.com/MagicInk/" rel="nofollow">&#8220;Magic Ink&#8221;</a> vein. Perhaps I&#8217;ll extend humane to allow cognitive interruptions that presage hidden risks, if the doses are measured right, like the ones the body might supply when we step through a door. But my design biases, which you anticipate and deal with in the paper, almost prevented me from learning something new. So thanks for the reply.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Ryan Calo		</title>
		<link>https://cyber.jotwell.com/i-always-feel-like-somebodys-watching-me/#comment-416</link>

		<dc:creator><![CDATA[Ryan Calo]]></dc:creator>
		<pubDate>Thu, 20 May 2010 16:45:22 +0000</pubDate>
		<guid isPermaLink="false">https://cyberjotwell.dewjbxx2-liquidwebsites.com/?p=148#comment-416</guid>

					<description><![CDATA[Jason,

Thanks for your note, which Paul just brought to my attention.  

What evidence there is suggests that the effects will not wear off.  In one study, the effect of eyes on participants was about the same at 1 week as at 9.  Reeves and Nass have shown that computers as social actors (CASA) effects are just as pronounced on people very familiar with the technology.  It&#039;s ultimately an empirical question and more study is certainly welcome.  

I&#039;m not sure I know what a &quot;humane interface&quot; might be.  Greg Conti talks about &quot;malicious interfaces&quot;; perhaps this is the flip-side?  My own sense is that many people reflexively &quot;click through&quot; opt-in conditions.  And recent work out of Carnegie Mellon suggests that some privacy options risk giving users a false and potentially harmful sense of control.  But anyway: there&#039;s no reason we couldn&#039;t combine visceral notice with a user option (e.g., the user clicks the option to encrypt and the eyes disappear). 

I agree Aza&#039;s notion of privacy icons holds promise.  (You may have noticed that Aza lists me as an adviser on the project.)  

Thanks again.  Best,

Ryan]]></description>
			<content:encoded><![CDATA[<p>Jason,</p>
<p>Thanks for your note, which Paul just brought to my attention.  </p>
<p>What evidence there is suggests that the effects will not wear off.  In one study, the effect of eyes on participants was about the same at 1 week as at 9.  Reeves and Nass have shown that computers as social actors (CASA) effects are just as pronounced on people very familiar with the technology.  It&#8217;s ultimately an empirical question and more study is certainly welcome.  </p>
<p>I&#8217;m not sure I know what a &#8220;humane interface&#8221; might be.  Greg Conti talks about &#8220;malicious interfaces&#8221;; perhaps this is the flip-side?  My own sense is that many people reflexively &#8220;click through&#8221; opt-in conditions.  And recent work out of Carnegie Mellon suggests that some privacy options risk giving users a false and potentially harmful sense of control.  But anyway: there&#8217;s no reason we couldn&#8217;t combine visceral notice with a user option (e.g., the user clicks the option to encrypt and the eyes disappear). </p>
<p>I agree Aza&#8217;s notion of privacy icons holds promise.  (You may have noticed that Aza lists me as an adviser on the project.)  </p>
<p>Thanks again.  Best,</p>
<p>Ryan</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Jason Treit		</title>
		<link>https://cyber.jotwell.com/i-always-feel-like-somebodys-watching-me/#comment-415</link>

		<dc:creator><![CDATA[Jason Treit]]></dc:creator>
		<pubDate>Thu, 20 May 2010 08:59:27 +0000</pubDate>
		<guid isPermaLink="false">https://cyberjotwell.dewjbxx2-liquidwebsites.com/?p=148#comment-415</guid>

					<description><![CDATA[&quot;Perhaps rather than displaying only a traditional, text-laden privacy policy, Calo argues, websites should also include a picture of a pair of eyes above the text, or perhaps, I would add, lawmakers should force them to do so.&quot;

The appearance of eyes (why not charred lungs?) over text is visual noise we&#039;d desensitize to almost immediately, and if anything a step backward from more humane interfaces and plainly stated opt-in conditions.

Aza Raskin&#039;s idea of bolt-on &lt;a href=&quot;http://www.azarask.in/blog/post/is-a-creative-commons-for-privacy-possible/&quot; rel=&quot;nofollow&quot;&gt;privacy icons&lt;/a&gt; makes loads more sense.]]></description>
			<content:encoded><![CDATA[<p>&#8220;Perhaps rather than displaying only a traditional, text-laden privacy policy, Calo argues, websites should also include a picture of a pair of eyes above the text, or perhaps, I would add, lawmakers should force them to do so.&#8221;</p>
<p>The appearance of eyes (why not charred lungs?) over text is visual noise we&#8217;d desensitize to almost immediately, and if anything a step backward from more humane interfaces and plainly stated opt-in conditions.</p>
<p>Aza Raskin&#8217;s idea of bolt-on <a href="http://www.azarask.in/blog/post/is-a-creative-commons-for-privacy-possible/" rel="nofollow">privacy icons</a> makes loads more sense.</p>
]]></content:encoded>
		
			</item>
	</channel>
</rss>
