When Am I a Hit on the Interwebz?

Reading an article today about how a video of child abuse on YouTube became popular by going from 300 views to over 200,000 in two days made me think about what counts as a hit on the Internet and what makes a piece of technology popular. I am no social scientist, nor do I have much to back up these claims, but I would like to think these are decent ruminations on the matter.

It used to be the case that people had to get featured on the local news to become famous, or do something spectacular to get attention beyond the local church bulletin. Now, the Internet allows every funny video of your kid on heavy medication after some mouth work to become an international hit. But that is all large scale.

What really makes someone “popular” on the internet? Is it 1,000 views? 50,000 views? 500,000 views? Or is it even views at all? To be honest, I am unsure. Just because something becomes popular on Reddit or is talked about on CNN does not mean it has actually stuck. To be popular on the Internet, a thing has to reach a wide range of people, and views and discussion alone are not necessarily enough. To be a hit on the internet, something has to be interesting, but the most important part is being simple.

Today people have increasingly short attention spans. These shorter attention spans mean that for something to be memorable, content creators have to make their point succinctly and quickly enough that people get that hit of awesome, while still providing enough information that viewers are neither lost nor uninterested in the potential internet sensation.

The Augmented Reality Problem

In light of our recent talk with Whurley from Chaotic Moon, instead of looking at just one augmented reality app, I am more interested in examining the market and platform in general. The platform is inherently an interesting idea and could potentially revolutionize several fields. One field we discussed was medicine, where augmented reality systems can be used as training devices. The platform can be very effective since it adds more layers of data on top of what is already in the world. The layered information is what is key. The extra layers give better depth and understanding to what already exists; they alter how accessible information is and how it is displayed. Changing how things are displayed is a game changer (no pun intended), since pertinent information can suddenly be displayed in real time, in real space, and on top of that the information can be rapidly changed along with the environment it is being displayed over.

The problem, though, is that most interfaces for augmented reality are terrible. They require a camera to view the environment, a transparent display to view the information, and some way to manipulate the data and/or control the interface. Because of these major components (and, of course, things like microprocessors and miniaturized electronics), augmented reality devices are rare and exceedingly clunky. Even though there are obvious benefits to augmented reality, our current technological state (and apparent obsession with trying to break the laws of physics and optics) greatly limits what we would like to do with the technology.
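
To make the “layered information” idea a bit more concrete, here is a minimal sketch of my own (in Python with OpenCV, not anything from the talk) that draws an extra data layer on top of a live camera feed. The label and box are hard-coded placeholders; a real AR system would get them from tracking and recognition, and would render to something better than a desktop window:

```python
# A minimal sketch of the "layered information" idea: grab camera frames
# and draw an extra layer of data on top of them. This is not a real AR
# pipeline (no pose tracking, no transparent display), just an illustration.
import cv2

def annotate(frame, label, box):
    """Draw a hypothetical data overlay (a box and a label) onto a frame."""
    x, y, w, h = box
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(frame, label, (x, y - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return frame

cap = cv2.VideoCapture(0)              # the camera that views the environment
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # In a real system the label and box would come from tracking/recognition;
    # here they are made-up placeholders (the medical-training example).
    frame = annotate(frame, "patient vitals: 72 bpm", (50, 50, 200, 120))
    cv2.imshow("augmented view", frame)  # stand-in for the transparent display
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```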

The other hindrance to augmented reality is the market. Sure, one could easily argue there is a market for augmented reality services; the growing number of smartphone users alone would provide an ample consumer base for the technology. People can usually get over a crappy interface if they like what the software does. The problem with the market, though, is that many of the companies behind the hardware and software are too busy trying to sue each other over software and hardware patents (more so software patents). The constant legal battles create a miserable environment for someone trying to break into the market, let alone for someone already in the market trying to innovate. The level of innovation in the field of augmented reality is horrendously low; Whurley talked about asking leading researchers in augmented reality about the problems it faces, and over the past decade the answers to those same fundamental questions have gone unchanged and unresolved. The market is stifled and needs some kind of injection to get off the third floor (it seems less appropriate to say getting off the ground in this case since there is plenty of AR floating about now). With companies more worried about infringing on other patents, there is less incentive to make something new, because everyone is patent-trolling everyone else and original ideas are quashed by filing a piece of paper first rather than by making something functional and innovative to the field. Why on Earth is swiping something across the screen to unlock a device patentable? Wait, that is another story.

TL;DR Augmented reality has been slow to come to fruition due to technological constraints and a bunch of assholes in suits fighting over who gets which pennies.


ENDOFLINE;

Virtual Environments Response

Virtual environments, or virtual worlds, resemble the real world in several ways. They are governed by specific sets of rules that we don’t always have a full grasp of. They give users a certain level of autonomy that is constrained by those preset rules and/or the community. There is also a fair amount of risk involved in dealing inside them.

Risk, I think, is what most people overlook when it comes to the virtual world. People think that since the virtual world is on the internet then it must have less risk; people act like the internet removes risk from the equation when it is quite the opposite. One could argue that the internet involves more risk. More information available faster can mean a more volatile market and community.

Sure, things like death are less of an issue (and in some cases a non-issue) in virtual worlds, but that does not mean those worlds are less risky than the real one. People expect less risk in virtual environments, but less risk also means less fun and a lack of options. Replicating the risk of real life gives users the chance to fully explore the human condition and try things that one would not normally be able to.

A good example of this is playing a pirate in EVE Online. Normally, people can’t just go around being a pirate all day as their desk job, but virtual environments give people the chance to experience and think about more than life can actively present to them. The risk is what makes it worth it, because people feel more fulfilled completing something risky than completing something they know they won’t have a problem doing. Fiero can be a powerful thing.

The Tyranny and Revenge of the Faster Internet Connection

Information is the currency of the Digital Age, and the Internet is the Treasury that prints it. The problem, though, is distribution. We want more information, and we want “richer” information at that, so we need bigger pipes to deliver it, which takes up more and more bandwidth. So we get our information; now what? We suddenly want more. I found something cool about cephalopods on Wikipedia, so I am going to look up more about a particular kind of squid, which requires more bandwidth than the simple query on cephalopods, then maybe watch a crappy B-movie on Netflix like Mega Shark Versus Giant Octopus because all this talk of the sea has gotten me wanting some crazy sea creature battle. My friend Thomas sees all the fun I am having learning about cephalopods and wants to get in on the action. Bad news for him, though: he doesn’t have high-speed internet. Seeing all the fun I am having with fast Internet, he decides to get a faster connection.

And herein lies the catch-22: faster internet breeds more users, which breeds slower average speeds, which breeds more push for faster internet speeds. The internet feeds into itself. The content within it keeps people coming back for more, wanting more, and contributing more. More, more, more. Today we are only as good as how fast we can get our information. To get that information, we are locked into a certain internet speed. Being locked into that speed for long breeds contempt in users, since they see other users adding more content and making the internet experience richer. But if I can’t load the page on cephalopods or watch bad movies on Netflix at a reasonable pace, then I am less happy and want faster internet.

This faster internet and richer content that we as producers, designers, and generators provide feeds back into the great system that is the internet and puts more demand on it. The problem we run into is that faster internet speeds also equate with bloated files, sites, and thoughts. We get the idea of cramming every bit with as much information as possible. Sometimes we execute it well and avoid clutter and disorganization. Sadly, though, most of the time we just slap more information on a page and call it done. We forget that people with slower internet speeds still exist. Developing countries in particular face slower speeds, which can create a high barrier to entry for people trying to join the 21st century.

The tyranny of the fast internet connection is that we are now ruled by it and expect it to be there for us. The revenge is that the internet simply feeds into itself and creates a monster we are unsure how to tame, other than throwing the glass of wine in our hands onto the raging grease fire. How do we avoid bottlenecks and the race for having more content? Simple: quit encouraging that sort of behavior. Create websites that are streamlined. Use graphics that are optimized and not filled with extraneous information (i.e. crop down to what you need and size the image accordingly). Remember that there are still plenty of people with internet connections so slow that dial-up looks like a T1 connection. We shouldn’t be needlessly throwing away principles because we now have more room to fit in the new divan from Ikea.

Our bandwidth doesn’t need to be clogged by just a handful of sites. We, as consumers and users of the internet, need to think about what we want to see in sites. Sure, YouTube videos are cool, but they are not always necessary on a blog. Bandwidth being choked by more people is a good thing, because it means more people are potentially learning something, keeping up with an old friend, or staying current on the news. Bandwidth being choked by designers is never a good thing, because it not only makes everyone look bad but also shows our lack of restraint. We become more worried about filling up the space when we don’t even know how big the space is, yet we do it anyway because we can. Maximizing one feature almost always minimizes another that could have been good.
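
To put the image-optimization point in concrete terms, here is a minimal sketch of that workflow using Pillow. The file names, crop box, and target width are made-up examples; the idea is simply to ship only the pixels the page actually needs:

```python
# A minimal sketch of "crop down to what you need and size the image
# accordingly," using Pillow. File names, crop box, and target width are
# made-up examples.
from PIL import Image

def slim_down(src, dst, crop_box, max_width=800, quality=80):
    """Crop an image to the region that matters, scale it to a sane width,
    and save it as an optimized JPEG instead of shipping the full original."""
    img = Image.open(src)
    img = img.crop(crop_box)                  # keep only what you need
    if img.width > max_width:                 # don't serve pixels nobody sees
        new_height = round(img.height * max_width / img.width)
        img = img.resize((max_width, new_height), Image.LANCZOS)
    img = img.convert("RGB")                  # JPEG has no alpha channel
    img.save(dst, "JPEG", quality=quality, optimize=True)

slim_down("cephalopod_full.png", "cephalopod_web.jpg",
          crop_box=(200, 100, 1400, 900))
```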

Technology Acceptance and Why Your Grandmother Doesn’t Have a Smartphone

Thirty years ago, few would have guessed that our phones today would have the processing power of a small computer. At best, writers like Tom Clancy figured we would by now have a catch-all device that included GPS, a phone, and a connection to the internet (though to be fair, Tom Clancy also assumed we would be in virtual reality when dealing with the net). Many of these future speculators guessed this technology would be ubiquitous and everyone would have it. But this is not the case. Virtually everyone has a cell phone, but not everyone has a smartphone. Many people have e-book readers, but it is ultimately still a small market. We have this shift towards mobile computing, the semantic web, social media, blah, blah, diddly, blah.

We have all these things, but they are not unified. There is no default interface. Sure, things carry over between iOS and Android, but they act like two different beasts. Regular scrolling, inverse scrolling, horizontal scrolling, and tilt scrolling each live on one kind of OS or another. Even as more and more gadgets become ubiquitous, we will get more and more “unique” ways to interact with those gadgets. What we won’t get is unity. This lack of unity is what I think drives down a great portion of technology acceptance rates.

Sure, money is an issue. Not everyone can afford a $400 cell phone, but they can afford one that is free or even under $100. What many people fear, though, is how something works. Sure, I got a flashy new Porsche 911 GT2 RS, but if I don’t know how the launch control works, then I look like a fool trying to race someone at a stoplight. Bells and whistles can be great, but they are equally overwhelming in the hands of someone who has not been exposed to such technology. The overwhelming part is what scares people away. They are already set in their ways of doing something in a particular manner and are less likely to want to do it differently, especially if that means interacting with something new and confusing. Your grandmother doesn’t have a smartphone not because she doesn’t like catching up with you or isn’t interested in playing Scrabble with you; it is because a smartphone represents an investment of money and time and would require new habits that might upset other well-established ones.

So how do we get around this? We have to think about how we interact with things on a more basic level. Why does multitouch make sense to me and boggle my grandmother’s mind? I was brought up thinking about these things (and a helpful dose of SciFi didn’t hurt) and could draw parallels that made interacting with things like an Xbox controller or a touchscreen phone make sense. What we have to do is find an analog for others to understand. Someone from the Australian Outback isn’t going to get that if he or she talks to the phone, Siri will answer back. We have to create interfaces and devices that communicate what they are doing and how they do it. Cellphones that resemble traditional phones are more likely to sell. Phones that look like some kind of dark Borg technology fail because people see them and don’t get that they are phones; they don’t recognize a parallel or the form as a phone, and they shun it.

We have to explain and design things in terms of the old for people to understand the new. Slowly we can alter the design and change the form, moving away from the old to arrive at something that resembles something new yet is still understandable. If you gave the iPhone 4S or the Samsung Galaxy S II to someone 20 years ago, they might not get that those are phones, but show the same phones to someone ten years ago and they would see the resemblance. We are more likely to accept what is familiar, so if we want to increase technology acceptance rates, we need to look to the old to inform the new rather than constantly making clean breaks to establish a new order.

The Missing Regulation: A Lessig Response

So I am rather confused having read the first five chapters of Code 2.0 and chapters 7 and 8 of Remix. The two books do not seem to jibe with one another. Sure, both have an interesting focus on the digital economy, but they seem to be asking different questions and coming to radically different conclusions. Before going much further, I will admit that these books might coalesce better if I had read them completely, but time and graduate school being what they are, I do what I am asked and will hopefully go back and finish them some other time.

Anyway, in Code 2.0 Lessig argues for a level of regulation in the digital realm. Without regulation, he argues, we end up with an economy and culture that is too wildly free, and the anonymity of the Internet breaks down too many barriers for us to be effective agents. Essentially, what I take from Lessig is that regulation imparts a level of order that the Internet and other aspects of the digital age lack. Lessig is arguing for more centralization in an age where everything on the net seems to be decentralizing. Crowdsourcing and collaboration seem to be the ways to go, and regulation does not always fit into those categories. Sure, they need a set of rules to abide by, but as they say in Pirates of the Caribbean: The Curse of the Black Pearl, “the Code is more what you’d call ‘guidelines.’”

Yet in Remix, Lessig seems to be singing a different tune. Suddenly strict copyright and all this heavy regulation are weighing down the economy that the digital age is building. Those overly restrictive and prohibitive copyrights are closing off avenues of revenue and whole swathes of users and customers, because people are turned off at the prospect of being sued for putting up the lyrics to “Single Ladies.” I don’t think Lessig wants to get rid of regulation in Remix, but it seems that what he was looking for before no longer drives the economy but rather hinders it.

I think what Lessig is seeing is that the older ways are starting to fade. Though it is a bit late if you ask me, people are finally beginning to see the massive flaws in the Digital Millennium Copyright Act (DMCA). Such restrictive measures keep people away from the online community, which denies value both to the community and to those who profit off of it. There is a much-needed shift towards hybrid markets: ones that value money but also value the community and what the community can impart to the product. A prime example of this is video games. Developers who release free SDKs (Software Development Kits) tend to have a better following because they give users the power to create more content. The users, in turn, add more value to the game by adding features or game types that the developer may not have thought of or had the resources to devote to. People marvel at the video game industry and its meteoric rise, but it simply follows the hybrid model that Lessig elucidates.

Bottom line? I think Lessig sees the value of what regulation can do, but also recognizes that regulation can, and will, run amok when profit is the only goal. This is why we need to focus on the community to help reinvest value into the digital economy, rather than using the community merely as a means to the end of achieving quotas.

ENDOFLINE;

Starting a Fire with Silk

Following up on my last blog post about Facebook’s new “frictionless sharing,” I have decided to continue my mini-rant on privacy with the new, soon-to-be-a-debacle Amazon Fire and its Silk browser. For those of you who missed it last week, Amazon has released its own tablet, called the Fire, seemingly to rival the iPad. Besides the poor name (which makes me wonder if they actually have to set the Amazon ablaze to make it), Amazon is trying to shift how we do mobile computing. By using processing power and storage in the cloud, the Fire requires very little onboard hardware to operate.

To accomplish this feat, the Fire uses a custom Android ROM and what Amazon is dubbing the Silk browser. Essentially, Silk is Amazon’s own proprietary browser that uses the cloud to do most of the processing and to pre-cache popular sites and pages. The only problem is that Silk routes all of its input and output through Amazon’s servers. So if you go to Google, Amazon knows. They will know everything that you do through their tablet; they will have a record of everything you search for, everything you upload, and where you uploaded it. With this knowledge, Amazon can target ads at consumers more precisely and even improve its algorithms for generating content that consumers might be interested in. For some, this is not bothersome, since they are either guarded enough about what they do online or they simply don’t think or care about the matter.
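
To make the architecture concrete: Silk behaves roughly like a caching proxy sitting between you and the web, which is exactly why Amazon ends up with a log of everything you visit. Here is a toy sketch of that idea in plain Python; none of this is Amazon’s actual code, and the logging line is the whole point:

```python
# A toy sketch of a split/caching browser backend, in the spirit of what Silk
# reportedly does in Amazon's cloud. Nothing here is Amazon's actual code;
# the point is simply that the proxy operator sees (and can log) every URL.
import time
import urllib.request

cache = {}        # url -> (fetched_at, body)
request_log = []  # the operator's record of everything you browsed
CACHE_TTL = 300   # seconds to keep a "popular" page around

def fetch_via_proxy(url):
    request_log.append((time.time(), url))    # <- the privacy issue in one line
    cached = cache.get(url)
    if cached and time.time() - cached[0] < CACHE_TTL:
        return cached[1]                      # serve the pre-cached popular page
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    cache[url] = (time.time(), body)
    return body

page = fetch_via_proxy("https://example.com/")
print(len(page), "bytes; the proxy has logged", len(request_log), "request(s)")
```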

What I see is another step people are taking into a digital realm where their privacy is compromised, willingly or not. Companies are inching closer and closer to having products that know virtually everything about the user and transmit that back to the company. In many cases, this transmission of data is completely lawful, since it is included in the EULA or some other contract required to use the device. First, I am surprised at how little this frightens people, though to be fair the Fire has only recently been announced, so not many people have felt much effect from it yet.

Second, I am more curious how we as designers and producers should look at this. Though we design products and software to make it easier for people to carry and read multiple books and websites on a single, small digital device, we don’t always keep in mind the ramifications of what we are creating. How can this Flash infographic I am making be interpreted? Will it change someone’s mind? Will it reinforce certain stereotypes or ideas? How would it be considered from a different point of view? Is it representing the information fairly? Questions like these need to be considered when creating digital (and, to be honest, physical) artifacts for public consumption. The current industry seems to care more about the ends a product can give us than about the end result of the product itself.

The Facebook Friction

Almost all of us use Facebook now. It is ubiquitous: it shows up in car commercials, banner ads, on your phone, and even in comic books. That ubiquity gives Facebook unparalleled power over those who use it. Facebook can control whether or not your profile is allowed, decide where your information is sent or sold, and even start influencing where you shop. All of this is fine and dandy, since I willingly give Facebook my information when I create my profile. It is also fine that some people use Facebook as their social calendar par excellence. Whatever you do on your Facebook is your prerogative, but what Facebook does should be our concern.

Recently, at the f8 conference, Mark Zuckerberg announced a new Facebook feature called “frictionless sharing.” The basic idea is that by going to certain sites and using certain apps and programs, Facebook can share with your friends what you are doing, where, and when, all without you having to tell anyone manually. This really is nothing new. For a year or two, Facebook has offered affiliate sharing, where certain sites could link with your Facebook account and start sharing what you do on their sites with your Facebook friends. The new kicker is that even if you are logged out, Facebook can still track what sites you are going to, though not necessarily share them with your friends. This just seems like a massive invasion of privacy, right?

Wrong. There is an old economists’ saying: “There is no such thing as a free lunch.” When nothing is being sold to you, you are the product being sold. With a combination of ad revenue and information sharing, Facebook makes plenty of money knowing that you are from Timbuktu, Michigan. So why wouldn’t Facebook want to surveil you more to gain the best chance of selling ads that you are more likely to click on and respond to?

Exactly. This is not the first time Facebook has gotten into trouble with its users. iOS and Android users face the same problem with apps that demand access to a user’s contacts or the ability to scan a user’s text messages or emails. Though this is a problem, most people don’t even realize it is happening and couldn’t care less as long as they can play Angry Birds and watch Hulu.

So my question is: how do we, as developers and designers, produce systems that help mitigate the privacy concern for users? We can refuse to sell our users’ information to others, but these days that is almost useless if someone does a bit of data mining. Another way is to offer robust opt-out functions that are easily accessible to the user and clearly label what a person is opting in to and out of. Past those options, I don’t think we can do much of anything anymore. Information is such a commodity that we have sacrificed privacy for immediacy and gratification. I give Facebook my information so I can play Mafia Wars (which I quit years ago) and schedule a party for a friend’s birthday (which Facebook knows I am doing, so it starts targeting birthday-party-related ads at me). Facebook has become the social connector of 2011 the way the telephone was in the 1950s. The only difference is that back then I wasn’t worried about someone selling my conversation with Bethany in order to target billboards at us.
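
If I were to sketch what a “clearly labeled” opt-in/opt-out design might look like in code, it would be something like the following: every kind of sharing defaults to off, each switch carries a human-readable label, and nothing is shared until the user explicitly flips it. The field names here are hypothetical and do not correspond to any real platform API:

```python
# A hypothetical sketch of opt-in-by-default sharing preferences with
# human-readable labels. None of these names map to a real platform's API.
from dataclasses import dataclass

# Labels spell out exactly what each switch does, in plain language.
LABELS = {
    "share_listening_activity": "Tell friends what music you listen to",
    "share_reading_activity": "Tell friends which articles you read",
    "share_location": "Share your current city with apps",
}

@dataclass
class SharingPrefs:
    # Every category defaults to OFF; sharing happens only after an explicit opt-in.
    share_listening_activity: bool = False
    share_reading_activity: bool = False
    share_location: bool = False

    def may_share(self, kind: str) -> bool:
        """Only share if the user explicitly turned this category on."""
        return getattr(self, kind, False)

prefs = SharingPrefs()
for name, label in LABELS.items():
    state = "on" if prefs.may_share(name) else "off"
    print(f"{label}: {state}")           # everything starts off
prefs.share_reading_activity = True       # a clearly labeled, deliberate opt-in
```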

Wealth of Networks Response 1

If anything, the reading was able to answer only one question I generated earlier in the week: how can we break the cycle of copyright begetting more copyright? The answer is deceptively simple. It is not to get rid of copyright, nor is it to make everything distributable at the whim of the creator. The trick to solving the problem lies in open-source methods and peer content generation. Open source works because it lets a larger group of users act as developers on their own time to improve the software. By distributing the knowledge of how the software works, patches and upgrades can happen more quickly, because suddenly there are more eyes poring over code and more brains troubleshooting problems. Copyright only serves the holder and hinders innovation, since the copyright holder can choose not to let anyone create derivative works until the copyright is up. Also, something that Benkler does not expressly say but that seems implicit in his argument is that open source is forever, whereas exclusive copyright should be at best extremely limited in duration. This maintains the flow of innovation without gaps or hiccups where innovation was controlled instead of being allowed to work freely and without structure.

It appears to me that what Benkler is really arguing for is the distribution of knowledge across a wider user base than has previously been used or allowed. Benkler loves to come back to words like “decentralized” and to talk about markets with less structure; the more rigid the structure, the less chance there is for innovation and improvement over the original model. Decentralization is key because it distributes everything across a larger plane without sacrificing efficiency or a semblance of order. The saying goes “knowledge is power,” so with more people holding that knowledge, more people are empowered to effect change in whatever they are involved in. McGonigal would have a field day with this, because it supports the idea that many people doing small things toward a larger positive goal can do the same amount of work as professionals in a similar time period. There is no need for Big Brother to watch over us as warden if we are all doing what we want and need to do. As McGonigal says, we are more likely to do something of our own volition than because a boss tells us to; at times we are even more efficient being our own boss than having someone looming over us all the time. If everyone, or even a small portion of the populace, has the knowledge, then there is a greater network of people who can work on that particular subject matter. Suddenly there are armchair experts in the field of biomechanics because a company has decided to make all of its information open source. Iterative plans, designs, and ideas can be generated more quickly and efficiently by a networked group of people than by a couple of eggheads in a cubicle.

TL;DR We need to decentralize and distribute knowledge to foster innovation because centralization of information only stagnates the market without giving way to bigger and better things.

The Curious Life of Flash

Flash has always been kind of the bastard child of the media world. Some people love it; some people absolutely hate it. It flirts between being a programmer-heavy application and something that can make internet motion graphics worthy of awe. Flash is also usually one of the most insecure programs on the Internet and can easily lock up and crash without much warning. It is a fickle child born from Macromedia and eventually loved more by Adobe.

But even with the strengths Flash has, many companies would rather not use it or even deal with it if possible. Apple has been staunchly against the use of Flash, even going so far as to denounce it in an open letter, saying that it is not compatible with the future of technology, like touch devices and operating on open principles. Even with this major obstacle, Flash has still marched forward. Whether it was posturing on Adobe’s part or Apple giving in to its user base, Apple has finally relented to allow some Flash content onto the platform. Even though it is just movies, this is a big step for Adobe. As we all know from working with Flash, you could make a game inside a movie clip… :)

Regardless of Apple giving in, it is interesting that Microsoft, which has long been friendly to Flash, is now opting out of using it in Windows 8. The current preview builds of Windows 8 feature the Metro UI as the principal way to navigate the computer. The Metro UI has Internet Explorer built in (which could be the subject of other rants from me), but that version will not run any kind of plug-ins. For Flash to run in IE, you have to have the plug-in installed, so to view anything with Flash, users will have to switch to the desktop UI. Well, this seems rather backward, don’t you think? It is also interesting that Microsoft, which has generally not hated Flash, has decided to cast it by the wayside the instant Apple gives Flash a bit of room to grow on the iOS platform.

So really, what does this all come down to? Is it just developers being stubborn and OS makers wanting you to use their tools and nobody else’s? Is it Adobe holding onto antiquated software? I think it is a combination of all of these. Adobe and Apple have both been known to force people into corners with their technology and software (FCP X not being compatible with anything but the newest version of Motion; Adobe making it hard to transfer a license key from computer to computer or OS to OS). Flash very well could be outmoded by HTML5 and CSS3, but only time will tell. Completely abandoning something that is flawed but works is never a good call. As the (probably apocryphal) story goes, NASA spent a fortune inventing a pen that could write in space while the Soviets just used pencils.

All the time and effort spent proclaiming how great HTML5 and CSS3 are still hasn’t come to fruition. Yes, it all looks great, but there still isn’t an official standard, and browsers still don’t fully support them. I am not saying we should not start learning HTML5 and CSS3, but we should also not stop learning Flash. HTML5 and CSS3 have yet to really prove themselves and show that they are, on the whole, better than Flash. I am sure that within days of HTML5 becoming a standard, someone will be able to break it. That is just how the internet works. The security argument against Flash can be made against anything, so that is moot.

So why are we in such a big hurry to rid ourselves of Flash? It works. It functions. It does what we need it to do. It is kind of like that Ford truck from the ’50s that your grandfather has: if you keep it well maintained, it will keep running until the world runs out of oil. I doubt that Flash will ever truly die unless Adobe lets it die. If you ask me, Flash should keep evolving alongside HTML5 and CSS3 to stay competitive. Flash should be a tool in everyone’s web toolbox. Whether or not I reach for that particular hammer on every job is different from having it around for when I need it or would rather work with it.

/rant
ENDOFLINE.