The creative process and UX

Added on by Mircea Botez.

This is part II of Switching gears and locking paths.  

Sometimes when figuring out solutions we go from A to D instinctively. The mental cookie you get from leapfrogging a few steps is an incredible drug. It's seductive, gives off an air of mystique; you pace along for 5 minutes and come down from the mountain with a solution. 

I did it in the past. It's a lottery. The team will never know whether it will take you 5 minutes or 30 to come up with the next solution. Skeptics (i.e. developers) will demand an explanation and the B and C points of the path. You need to convince your stakeholders and your team that D is where you need to go, but you cannot quite articulate why. In the past I've used charisma, persuasion and authority to get over this. The solution is so obvious to you that you roll your eyes when your team/client/users dispute your suggestion. However, "Trust me, I'm a designer" can only go so far. 

People label "creatives" and keep them at a distance. Never anger them or they will attack with their arsenal of pencils. Clients use this as an excuse to never get involved in the process and hope that "creatives" will surprise them pleasantly. Some "creatives" use this to build an ivory tower around them. They feed the mystique because they are afraid they will lose their jobs once people find out how they do what they do. And so they gamble designs and jobs and then justify losing a customer with "they just had no taste, they couldn't appreciate my work". 

In the end both sides lose. I have a few theories about why this happens. I reckon putting all of it together in a documented manner might fill a book or two, but I'll take a shot at it. I think the biggest factor is education: the marginalisation of the arts in school. People are not aware that drawing is a language and a skill you can learn. Here is the truth about any creative pursuit:

Success is always commitment and hard work. Talent is just a shortcut. 

That is rather unpleasant to hear. People are more comfortable saying "I just don't have the talent" than "If I invest several hundred hours I might get to that level". 

"I know these are bad... " you say when you're showing your photos : create an account on Flickr. Go every weekend and browse the Explore tab for 2 to 4 hours. Favourite the ones you like. Do that for up to 40 hours or about 2-3 months. Note that I didn't ask you to take photos for 3 months. I only asked you to look at pretty pictures every other lazy Sunday for a few hours. After those 2-3 months, go take some photos. As if through a feat of magic, your compositions will have improved. People think learning photography is about wielding a DSLR and slaving over technicalities. In reality, it's about grabbing your cellphone or setting the DSLR on auto and learning to "see". Paying attention to details. Taking a step back. Ignore details and focus on an interesting subject. 

"I can't draw".  Take a look at the following images: 

Does the cylinder on the left look like something you'd draw?


These couldn't have been made by the same person, right? I mean, it's not like someone took 3 years and dedicated himself to learning to draw from scratch? Go ahead, take a look at his progress shots. But that's some guy from the internet, right? Let me put my money where my mouth is:

Left: 7 days after starting; Right: done at lunch 

Left: first color sketch in Photoshop. Right: 8 months later

I'm nowhere near where the guy above is, but seeing his progress has helped and inspired me (and others, no doubt) so much. 

In "The Art of Game Design: A book of lenses" Jesse Schell makes the distinction between the minor and the major gift:

"The minor gift is the innate gift.[...] game design, mathematics or playing the piano comes naturally to you. You can do it easily, almost without thinking. But you don't necessarily enjoy doing it. [...]

The major gift is love of the work.[...] How can love of using a skill be more important than the skill itself? If you have the major gift you will design using whatever limited skills you have. And you will keep doing it. And your love for the work will shine through, infusing your work with an indescribable glow that only comes from the love of doing it.  And through practice your skills will grow and become more powerful until eventually your skills will be as great or greater than those of someone who only has the minor gift. And people will say: "Wow. that is one truly gifted person" They will think you have the minor gift, of course, but only you will know the secret source of your skill, which is the major gift: love of the work."

There is only one way to find out if you have the major gift. Start down the path, and see if it makes your heart sing."

People look at my drawings and say "Wow, you're talented". A year earlier they would have had no chance to say that, as none of my drawings existed yet. That has been my path, tackled bit by bit every day. 

I've become convinced that there is a process to creativity. If we can describe quantum mechanics, surely creativity cannot be more complicated than that. Now, I'm not a cognitive scientist; I've certainly not studied enough on the subject to begin to approach defining this process. All I have are floating bits and pieces. 

Defining a narrower scope is an order of magnitude easier. And this is where I want to bridge User Experience design with process. Thankfully I've matured past my youthful rebellion against paperwork and structure to recognise the benefits of process-oriented thinking. 

A clean, concise process explained to teams, customers, bosses helps ease people into collaboration and understanding. It also brings you, the creator, back on the path of creative flow. Listen to yourself and find out your process. This will bring you back on track when you think the "muse" has left you. If you listen carefully, you will also find out why it "left". 

Sharing this process will put everyone around you at ease. They will understand that it takes several tries and failed attempts, and that this is normal. They will learn the need to explore a domain of solutions rather than hunt for the single "correct" answer. Clients will know that you require input from them and they'll prepare to be part of the process. 

The disconnect in people's heads when faced with a design-related activity is avoidable. You remove the lottery. Take out the guesswork. Dispel the mystique. Teach people the major gift and share your love of the craft. Don't hide behind the minor gift. 

You will discover newfound trust among people around you. And be recognised as a professional.

The software architecture time traveler

Added on by Mircea Botez.

Would you like to go back in time and tell your younger self what to do to succeed and what perils to avoid?

This is what Tom Gilb did for me today.

As time travelers usually go, he spoke in cryptic ways, poured out a lot of knowledge and had to leave quickly. He left me with the glimmer of hope that the software industry can achieve a 100% success rate instead of the abysmal 20% of today.

The folks at Libero organized the Software Architecture Day where Tom & his son Kai Gilb put together an intense course packed with information.

Tom struck me as what I would call a software architecture scientist. He is among the few who have worked in IT since the '60s, and he has a scholarly approach and detachment. He has worked for major companies and helped steer very large boats into clear waters. Despite this, both he and his son display a Norwegian humility and openness, coupled with the desire to spread their knowledge.

At first glance, what they presented seems to be just common sense. Focus on stakeholders and end users. Make sure requirements are clear and refined. Define clearly measurable quality attributes. Be concise in expressing architecture. Use Agile, but not as a silver bullet. Start delivering value in the first week. Reassess and learn during each iterative cycle... not only in development but in project management and architecture.

Sounds like bits and pieces of every other piece of software development advice? The biggest "twist" is thinking about architecture in an engineering way. "Real" architects and civil engineers have long enjoyed the respect and trust of people from all walks of life. Houses, bridges and so on have been built for hundreds of years without fail. Yet we struggle to keep our systems up almost every day.

Parallels and highlights

"We're proud that Agile has reduced a 40% failure rate to 20% failure rate. In other words we're proud that we only kill one pedestrian at each crossing instead of two".

"One of the biggest failures is not filtering/sorting/correctly quantifying the stakeholder values BEFORE the team starts developing"

"The human body manages all kinds of systems and if one fails, we die; we should engineer our systems the same way".

Force yourself to put the architecture on one page. Tom referenced a Mozart anecdote, "There are just as many notes as there should be". I immediately thought of U2's guitarist Edge:

"Notes actually do mean something. They have power. I think of notes as being expensive. You don't just throw them around. I find the ones that do the best job and that's what I use.".

In software, a 200-page document will never be read. Every architect should go through the exercise of expressing a complex system on one page. The bad version of this is "The more I write the wiser I will seem".

Each line in the architecture must address or refer to a quality attribute. Last week I read Kurt Vonnegut's 8 tips on how to write a great story:

"Every sentence must do one of two things — reveal character or advance the action."

Why not try to write your architecture so it is a great story?

Architecture that never refers to necessary qualities, performance characteristics, costs, and constraints is not really architecture of any kind. This reminded me of a Ken Levine tweet which I can't find at the moment. Paraphrasing: "Oh, you have a great game idea? Come up with a great idea that gets the whole team on board, can be done within budget and fits a large enough audience to make a profit... and then we'll talk".

Few speakers imparting such wisdom are backed by experience and a proven track record. When you see projects all around suffering from a lack of goals, and platoons of developers trudging along without raising their sights to yell "Why are we doing this?", someone telling you that a 100% success rate in software is possible sounds like, if not a time traveler, then an alien from a distant planet. I thought it was impossible. And after seeing the diamonds in the presentations, diamonds that have been around since the '70s, I wondered why they are not implemented all around us.

I realised after the talk that to fully understand Tom and Kai's methodology you need to: have participated in at least two projects that failed; have been in direct contact with a software quality and measurement practice such as CMMI; have worked in at least two Agile projects (these can overlap with the ones that failed); have been on the front line gathering requirements from the client; and, of course, be in a role where you define software architecture. That is a whole lot of IT-related experience to go through.

This, coupled with the fact that software is invisible and software cities crumble all around us, makes customers and CEOs afraid of fully trusting yet another methodology.

I've found myself in the following dialogue on more than one occasion: "What do you do?" "Oh, I'm an architect"; "Ohhh, really?"; "Yes, a software architect"; "Ah... I see". This drives me up the wall. I've shared with Tom my frustration that we software architects are not as respected as architects, lawyers or doctors, even though we can affect larger numbers of people. Tom smiled and said: "I give us 50 to 100 years before we achieve this".

Time traveler indeed.

How to stay up-to-date with technology in 10 easy steps, Java edition

Added on by Mircea Botez.

This is not a self-improvement post.

  1. This tool/framework/suite/software has some obvious flaws that I can clearly single out after working with it for several years.
  2. I could make something better! I'll just take out the good, rebuild it with current technology and leave all the bad/outdated parts behind! This is surely my ticket to money and fame! (Alternative for the less deluded: there must be something better than this that I can use!)
  3. Attempt to get up to speed ("will only take a few mins") with new, competing technologies.
  4. Find out there are four, no, eight, no, 15 new frameworks, two new languages and three new build tools for you to grasp. The questions you are asking Google stopped being answered 3-4 years ago.
  5. Start googling these new frameworks and languages, adding useful words such as "review" or "x vs y" or the ever popular "<framework> <current year>" so you get the current status of the project instead of the hype-filled early material.
  6. Find out there are new paradigms in how software is being developed. Research those paradigms. Definitely stop when you reach posts on http://lambda-the-ultimate.org
  7. Start asking Google the right questions with the right paradigms.
  8. Reach answers from the past 3-4 months on the topics.
  9. Congratulations! You have reached the present day. Have a beer and relax. Tomorrow you will be back at work, where you will start mentioning all the cool new things you read about. Prepare lines to whine about the technologies currently used by your poor coworkers, who have not been enlightened yet.
  10. Wonder where the last 3 hours have gone. I was supposed to build something, wasn't I? Oh well, look at all the shiny new toys I have!

 

Today, the above steps for me were Liferay -> ?? -> Spring ("Rod Johnson is no longer involved? Pivotal spun out of VMware, which bought SpringSource? It was only a few years ago I met Costin Leau at my local JUG!") -> Scala, Groovy, Akka, Play 2, Spock, Geb, Gradle, sbt, Apache Camel, JBoss Fuse, Heroku -> DSLs, Actors, Iteratees, live bytecode manipulation, stateless HTTP, container-less deployment -> no beer. 

I have come to realise Java has become the Borg.

Every couple of years some hotshot comes along with Ruby/Python/Language du jour stapled onto the Framework du jour and demos in 5 minutes what previously used to be a day's work; YOUR day of work. People flock, the Borg resists, adapts, assimilates, moves on in pursuit of perfection. 

Rails becomes Grails, Python becomes Scala and Ruby becomes Groovy. make becomes ant becomes maven becomes gradle. As the famous saying goes, "Resistance is futile". As long as you have learned stuff, don't despair! Adapt and assimilate...

Crumbling cities

Added on by Mircea Botez.
crumble |ˈkrəmbəl| verb
break or fall apart into small fragments, esp. over a period of time as part of a process of deterioration (as adj. crumbling)

Software is like a huge city that can expand to 10 times its current number of residents at a moment's notice. You could move the entire city (at some expense) to run above a stretch of water. We can repaint the walls of the city at the flick of a button. Similarly, you can remove an entire quadrant of buildings on a whim. The megacities have flying routes as well as dirt roads. They become more and more complex, more and more powerful.

They also can crumble in an instant. 

We've all seen it. Our favourite website is suddenly down. The airport lady smiles, noting that our delay is due to "a problem with the system". The clerk at the office is unable to serve us today, he says, because the "system" is rebooting. And every now and then your computer stops working for unfathomable reasons. 

The system is everywhere. It runs our cars, our water supply, our traffic lights and our banks. People accept these systems for good reasons: they make things faster, more accurate, shinier and apparently less error-prone than pen-on-paper or human-operated mechanical systems.

The mystery of digital items is that no one ever really sees them or touches them. Even the datacenters, our closest physical representation of the "system", are somewhere in remote facilities. We compound the trouble by naming things the "Cloud" and moving them there. 

Having been inside the city walls for what is now 9 years, I have to tell you, I'm mildly surprised computers manage to start every day. This post by Jean Baptiste Queru is a very neat explanation of all the little digital cogs inside the machine that are put in motion when you try to reach google.com via your browser.  See also a more humorous take on this.  

This is just you, connecting to google.com. Now imagine the system that runs our water, the traffic, our banks. 

Abstruse goose on computers


We accept that we don't know how the bits and pieces end up travelling from one computer to the other. There's a disconnect that happens in most people's heads when they try to comprehend the "system". You might as well be telling them that purple tap-dancing zebras on a giant keyboard relay the information at the core of the "system". 

My current title is that of Software Architect. Besides soliciting oh-that's-so-cute laughs from Real Architects, it entails that I create the blueprints of such a system. In order to be successful, I must gather information and get all parties to agree on a set course; find the cogs and bits that will solve the problem and think about how they can work together; future-proof the system; convince the team to follow my ideas and then stay the course; make a solution elegant and simple enough that I can brag about it to my colleagues. 

The reality is fraught with multiple contributors, legacy code that few people still understand, maintenance work done just to keep things running, and the ambitions of young developers eager to show their skill and ready to ignore previous wisdom because of its age. 

This is an industry born 60 years ago. We've rushed from platform to platform and from one technology to the next, improving and disrupting, dismissing entrenched principles and recycling them 10 years later. The internet is now in your pocket and the world is a virtual one.

I wish that one day we will have regained our users' trust; that software will take humans seriously, so that humans will trust software; that we can be trusted the same way structural engineers are trusted with our buildings. I will probably hear back that we now have processes, methodologies, unit testing and continuous builds! That things are so much better than only 4 years ago.

It is true, somewhat. We've come out of the dark ages and started a user-centered renaissance.   

We still have so much work to do.  

Switching gears and locking paths

Added on by Mircea Botez.

If you don't know where you're going, how will you know when you're there?

Life, mentors, bills, luck and curiosity have steered me down many roads in tech. In 9 career years I've learned my way through Web, Java, Enterprise Java (two different beasts), Liferay, Web Services, iOS programming, game development and an assortment of other things.

I think I have finally understood my path. It's always obvious in retrospect.

I was 8 years old when my mother bought me an HC85 computer, the Romanian clone of the Sinclair ZX Spectrum. She saved quite a bit for it and for programming lessons. Like any child I was fascinated by new things, but as my teacher tried to impart his wisdom on subroutines, variables and other intricacies of BASIC, my eyes often glazed over. When he showed me how to draw a dot, a line and a moving circle, I was hooked.

High school piled upon me heaps of algorithms, maths and abstractions that really struggled to get along with my brain. What latched onto me was the passion for computers that my CS teacher, Doru Anastasiu Popescu, projected in his class. It was that spark in his eyes that lit a fire in mine. To this day it is one of my most important criteria when recruiting teammates. I am really grateful for his guidance and inspiration; I also hope he has forgiven me for earning 5 points out of 100 at that CS contest to which he confidently sent me all those years ago.

You see, that contest, like many other computer science ones, involved reading data from a file, doing some funky math with it and writing the result into another file. No UI, only abstractions. I don't remember what the problem description was. I don't remember how the two hours flew by or what town I was in. I remember coming home red-faced and thinking only of what I would tell my teacher.  

This would be one of the many moments of self doubt that would follow. Am I a good programmer? Am I really fit for this? Why are these people encouraging me? You want to assign a team of how many to me?

While this is cathartic for me, I am also writing as an encouragement to others. I think constantly questioning whether you are good at what you are doing is an important part of growing up as a person and professional. You also need to move past this and tackle the next challenge.

In the end I graduated with an MS-Paint-like program that had no less than 5 windows and many instrument toolbars. I was quite proud of having written this program from scratch and not redoing something from a previous generation of students. I clearly had a lot of fun writing it, testing it, laughing at weird display bugs. I distinctly remember presenting it to the commission not worried that it wouldn't work, but with the enthusiasm of potentially expanding the program.  

I'm 32 this year and I've spent a lot of time riding technology's waves, trying to find my path. Again and again I've let my brain pull me in directions which seemed interesting to me but looked like mere distractions compared to "real work".

Two things remained constant throughout the years: my interest in user interfaces and my plethora of abandoned side-projects. When I was a child, it was circles on a screen. In college, OpenGL graphics and multitouch interfaces. As a developer, "web frontend" work. As a senior developer, more frontend work, except this time with enough experience for people to ask me how I think UIs should actually be built. As a Frontend Technical Lead (talk about title inflation!), a curse for junior developers whenever I spotted a 1px difference looking over their shoulder. 

With many projects along the path, I ran into the abstractions again. Web Services. Enterprise Service Buses. Data storage and distributed platforms. Big data and batch processing. BPM and data mining. A myriad of frameworks later, I've come to this conclusion: these are all wonderfully complex puzzles that algorithm-minded programmers love to solve. I understand them and have worked with them, but that is not where my heart lies. 

I walked a good part of it, but now I've found out my path's name: User Experience design.

Gamification or how I've learned to stop worrying and love Gartner

Added on by Mircea Botez.

I've come across this lovely diagram from Gartner:

gartner-hypecycle.jpg

I don't rank Gartner content highly, but this diagram actually makes sense. When Gartner posted it in July, gamification was riding high on hype. We're probably knee-deep in the trough of disillusionment by now. I've just seen one of Gartner's presentations on the topic. Having managed to stay awake, I think I've figured out why we're in the trough. I also found out why I willfully ignored the hype ramp-up, and I've seen a few paths up the slope of enlightenment.

"Web developer by day, wannabe game designer by night."

The above Twitter bio has been on my profile since September 2008. And yet, merging the two never occurred to me. Going down the list of obliviousness, I didn't pay attention to the likes of Foursquare or Gowalla until now. It didn't help much that gamification sounds like a marketing-drone-inspired word.

Why is that? Foursquare's "game" involves checking in to a location. Its basic game mechanic, so to speak, is "push a button to get one point". The additional rule is "you can't push the button for a location more than once per day". Now, other than the difficulty of physically getting to the actual location, this doesn't seem like much of a mechanic, does it? Once you get enough points, you get a badge. If you best the other "players" in clicking, you may become "Mayor" of the location. In order to preserve your "Mayor" status, you need to do exactly what you did the other day. In gaming terms, this is called "grinding": "... describe[s] the process of engaging in repetitive and/or boring tasks not pertaining to the story line of the game."
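To underline how thin that mechanic really is, here is a minimal sketch of the entire "game" expressed as code. Everything in it, the class, the method and the once-per-day rule encoding, is my own illustrative modelling, not Foursquare's actual implementation.

    import java.time.LocalDate;
    import java.util.HashMap;
    import java.util.Map;

    // A hypothetical model of "push a button to get one point".
    public class CheckInGame {
        // (user@venue) -> last check-in date, to enforce "once per venue per day"
        private final Map<String, LocalDate> lastCheckIn = new HashMap<>();
        // (user@venue) -> total check-ins; the top count per venue would decide the "Mayor"
        private final Map<String, Integer> checkInCount = new HashMap<>();
        // user -> points; one point per successful check-in, and that's the whole game
        private final Map<String, Integer> points = new HashMap<>();

        public boolean checkIn(String user, String venue, LocalDate today) {
            String key = user + "@" + venue;
            if (today.equals(lastCheckIn.get(key))) {
                return false; // already pushed the button here today
            }
            lastCheckIn.put(key, today);
            checkInCount.merge(key, 1, Integer::sum);
            points.merge(user, 1, Integer::sum);
            return true;
        }
    }

The whole gameplay loop fits in one small method; compare that with the pacing, feedback loops and artistry of an actual game.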

Of course, Foursquare has a business plan attached to their game, which is enticing businesses to offer deals and discounts to users of their platform. While in theory it sounds like a neat idea, with a convergence of social, location, mobile and business, and TechCrunch is reporting a $600 million valuation for them, I believe they are not sustainable on this model alone. Why? Groupon, the internet star of online deals and discounts, had a disastrous IPO just recently. Customers looking only for deals are not repeat customers. Deals may have promotional value but are not long-term revenue providers.

Games, and more recently video games, are defined by gameplay, types of goals, art style and so much more. People play games to have fun, to be immersed, to escape, to learn and to win. Some reward skill, some strategy, some just appeal to our emotions or have a compelling story. There is so much breadth to games, and we are a ludic race. Games are everywhere around us, helping us learn and socialize from an early stage in our lives.

Video games have now been sold for 40 years. They have spawned a multitude of genres, we are at the seventh generation of game consoles, and they are now a $35 billion industry.

Entrepreneurs, developers and designers have now grown up with video games and have created businesses based on game concepts or adapted game concepts to existing businesses. Some of these have attracted a lot of attention, and here we are, with even Gartner putting forth reports and analyses on gamification.

Fun and accounting

Games are designed to achieve what businesses yearn for: engagement, loyalty, positive emotions, word-of-mouth marketing. Good games stay in the collective memory of gamers for years, even decades. Some are still played decades later. (If we include chess and the like, we can extend this to millennia.) How many of you still use (and have fun using) Win 95? Taking a look at '95-era sites through the Wayback Machine [archive link], the only adjective you can label them with is primitive. There's an argument to be made about Google's minimalism all these years, but then again, a screwdriver is still a screwdriver. Yet people still play Starcraft 1 and Diablo 2, games which are more than 10 years old. Why is that?

There are (were?) many ways in which games differ from business software or websites. The ones I believe are most important: games focus on the emotional experience, and they are artfully realized, polished and fun.

Emotional experience: this is the one businesses run away from the most. Corporate software is generally valued if it is configurable, adaptable, one-size-fits-all, politically correct and so on. In other words, boring, unremarkable, trying too hard. Eliciting emotion brings loyalty. It also polarizes (which is why businesses run away from it). People talk about loving/hating Apple products. Customers that love your product won't stop talking about it. They will build a fan base for your product. They will be your evangelists. People that hate your product will blast you in online forums with much vitriol. Nevertheless, I feel it is a tradeoff worth making. 

Artfully realized: in a word, beautiful. Game studios are increasingly populated with artists instead of programmers; game engine development is nowadays a small part of game development. Each game menu is designed to match the entire experience of a game [screenshots here], even if you are only adjusting the video card settings. Concept artists and game designers put a lot of soul into a game. 

Polished: even though games also suffer from deadlines and their share of bugs, the polish level is through the roof compared with other types of software. Game designers worry about pacing, rhythm, discoverability, guiding the player. Besides obvious bugs, testers also note how much fun they had while playing a level. Because it's a game, the people working on it make sure there is nothing that can frustrate the player, break immersion or feel out of place in the game world.

Fun: surely accounting software cannot be fun? How can we turn "it's not frustrating" into "it is a joy to use"? Surely these are phrases every accountant would like to say. All of the above combine to turn games into fun experiences that people want to share with friends, talk about, and spend money on. 

Achievements versus gameplay

Achievements in the video game universe are a relative newcomer to the party. Microsoft, Sony and Valve have introduced their own systems for their own networks.

So what are achievements? "In video gaming parlance, an achievement, also sometimes known as a trophy or challenge, is a meta-goal defined outside of a game's parameters. Unlike the systems of quests or levels that usually define the goals of a video game and have a direct effect on further gameplay, the management of achievements usually takes place outside the confines of the game environment and architecture." (Wikipedia)

As it starts to click together, you can see that the badges and points that Foursquare and other "gamified" sites are peddling are actually tacked-on parts of a game: meta-goals. Sure, you can brag about them, but think about it: when visiting the Eiffel Tower, you're not going to tell your friends how amazing it was to check in with Foursquare there (if you do, there's something seriously wrong with you); you will tell them how beautiful the view from the top was. So we come to the problem and the conclusion. All these gamified systems are lacking the actual game part. There's nothing emotional about them, nothing artfully realized. 

Since I jumped on the hate train about one year too late, here is a list of links to other people thoroughly dismantling gamification:

Gamification = b*llsh$t

Gamification is Bullshit

John Radoff's Gamification article

Can’t play, won’t play

Issues of Gamification Design

And an excellent presentation by Sebastian Deterding "Pawned. Gamification and its discontents"

The slope of enlightenment

Now that I've made my case against gamification, getting closer to games is actually quite desirable. I will go through the list again: emotional experience, artfully realized, polished and fun. I think all software builders should strive for these, regardless of what kind of software they are building. Software should provide a free and safe place to play, encouraging people to try new things and not punishing them for their mistakes. Giving users appropriate and timely feedback puts them at ease. Establishing clear, achievable goals and rules and providing a challenge makes people engage with your software.

There is so much more that business software can do to get closer to games. Gamification in its current incarnation is not it. I'll leave concrete ways of bridging the gap to future articles. 

In the meantime, let's all try to bring a bit of fun into our users' lives. 

How to avoid writing a blog post in 10 easy steps

Added on by Mircea Botez.

 

1. Complain you don't have enough time

2. Google your subject before writing down the first words.

3. Discover there are already 10 sites, a Wikipedia page and several communities dedicated to the subject

4. Proceed to consume all of the above, including related links from the Wikipedia page.

5. Complain loudly that you won't be able to cover this subject without knowing ALL there is about the subject

6. Get sidetracked by email notifications popping up on the phone

7. Oh, someone favorited one of my tweets.

8. Why don't I check out his blog, which may or may not be related to the subject

9. Complain that you can never get things done and are an absolute failure as a writer.

10. Make a top 10 list of why you can't write something about the subject.

 

The Twitter Echo

Added on by Mircea Botez.

Many people don't know what Twitter is. Some people have accounts and don't know what to use them for. A few have accounts and use them, but still cannot define what Twitter is.

This is my personal take on Twitter. I'm not gonna explain the concepts, just what they do for me.

The gossip

Discussions on Twitter are a lot like gossip, except they are open to the audience of the intertubes. Short messages are prone to triviality, and sometimes that's all people see when looking at Twitter, dismissing it shortly after. However, brevity is the soul of wit. I've been at times frustrated with the 140-character limit, but it proved to be a powerful creative mechanism and one of the reasons Twitter works.

The realtime

One of the most frequent uses of Twitter among the tech crowd is following conferences. Conferences that are overseas, that you cannot reach, even talks you may have missed because they were in the other hall. If 50 years ago it took several weeks before you read a newspaper report on a conference about a specific topic, you can now follow along as it unfolds. Extrapolate this to important news, sports, or any kind of event and you will see it ripple through Twitter's streams faster than anything else.

The reach

Because of the character limit and the simplicity of Twitter's interface, I think it brings an intimate way of publishing. A celebrity needs staff to maintain a Facebook page, never mind an entire website. 140 characters can be typed in so little time that everyone, no matter how busy, can share their thoughts with the world. In the time I've followed some favorite actors of mine (or any kind of celebrity), I've been surprised by the level of candor and openness Twitter's discussions achieve.

The "It's not Facebook" factor

I opened a Facebook account at my family's request and keep it to share links from this blog. Compared to Twitter, I feel I am lost in a sea of videos, photos, and games. Every time I log in to Facebook, I approve x number of "Friend" requests, block an ever-increasing number of idiotic games and ponder the amount of time people waste on it. While I am not sure which way the information flows in Facebook (it's all a big wall to me), I can definitely identify that in Twitter. You don't "Friend" someone on Twitter, you voluntarily "Follow" them. Quite an important distinction. If you want to follow what someone says, it means the content produced by that person is important to you.

The echo

There are quite a few wordsmiths on Twitter who condense relevant, witty nuggets into 140 characters, but looking at my stream I see about 80-90% of tweets being links passed through. An opinion in a few words with a link, from a person you respect, gives that link a lot more weight. Following some intelligent people turns Twitter into a clever filter for the internet. This is also why many Twitter clients have first-class browsers included in them. In fact, I now check my Twitter feed for news first and my Google Reader account after.

An important part of Twitter is something which evolved from the community and was later included as a platform feature: the Retweet. This is Twitter's echo, gossip and reach all in one. One person posting valuable content is retweeted by their followers to their followers, and so on. Like an echo traveling at the speed of light (and mouse clicks), you can now gauge how important some news is by how many retweets it had. Why do retweets work better than any other sharing mechanism? Because you are actively following the person who retweets. Someone retweeting has deemed this worthy, and there's a good chance you will pass it along to your followers, saying "Hey guys, this is worth checking out. Trust me!". I think this social graph of trust is Twitter's greatest strength.

There are many ways in which people use Twitter. This is mine. What do you get out of it?

Design like Iron Man, part II

Added on by Mircea Botez.

The Iron Man returns! This time accompanied by Playstation Move, Novint Falcon and Leonar3Do

As I went to see the second installment of Iron Man, I kept wondering whether they would still have the cool visualizations and interfaces present in the first one. Maybe the audience didn't really respond to them? Was it only a geeky segment that would be cut to make room for more pursuits and lasers?

Turns out my fears were completely misplaced. Not only did they increase screen time, they went crazy with ideas! 3D scanners! Surrounding interface! Digital basketball!

So, keeping with that theme, if you thought part one had too many videos, prepare for another video-post extravaganza!

Let's run the analysis on this one. In the first movie, I referred to the AutoCAD-like interface as direct manipulation. In this video, the interface Tony Stark is using expands that concept, placing the operator in the middle of the interface and stopping a few steps short of Star Trek's Holodeck. Despite that, I'll try to describe how we would start building such a thing.

Again, center stage is accurate response to direct manipulation. But this time, Tony is surrounded by the elements, by his engines and digital representations of his suits, and is able to manipulate any of them as he pleases. Do note that he is moving through the room/garage while working. So if previously Tony used a stylus/table area on which he worked - focus on the working area, Tony facing the "computer" - now the entire mode of operation is focused on him. Pardon the corniness, but I'd call this type of interface a Renaissance UI, as it redefines itself around its human operator. How would we go about building this? Skipping the topic of holograms for now, let's presume we have a way to display the 3D elements in a spatial manner. We would need to track the entire "work area", meaning the room, for accurate positioning within it. Each of the digital entities would need to have an XYZ coordinate. Once that is in place, the human operator's position and full body posture need to be digitized and accounted for. There is a large amount of hand and head tracking in place. This needs to be very accurate and responsive. In this particular instance, head tracking needs to be especially sensitive. I think that Tony's work room would have to be absolutely packed with sensors, projectors and cameras, sitting above a cluster of servers processing the entire thing. To top it off, the room is not a cube-like (or perhaps dome-like), completely empty box like the Star Trek holodeck: there are cars! desks! robotic equipment! Oh, the headaches these Iron Man designers give me... As Robin Williams once said about golf, "They put shit in the way!".
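To make that bookkeeping concrete, here is a minimal sketch of the scene model such a room would need: every digital entity and every tracked body part carries an XYZ position in a shared, room-fixed coordinate system, and each frame the scene is re-expressed relative to the operator. All the names and structures below are my own illustration under those assumptions, not anything from the movie's fictional system or a real product.

    // A minimal scene model for a room-scale "Renaissance UI" (illustrative only).
    public class RoomScaleScene {

        public static final class Vec3 {
            public final double x, y, z;
            public Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
            public Vec3 minus(Vec3 o) { return new Vec3(x - o.x, y - o.y, z - o.z); }
        }

        // A hologram (engine part, suit, basketball...) anchored somewhere in the room.
        public static final class Entity {
            public String name;
            public Vec3 position;   // XYZ in room coordinates
            public Vec3 velocity;   // needed once the user starts throwing things around
        }

        // The operator, as reported by the sensor array: the head pose drives the
        // rendering, the hand positions drive direct manipulation.
        public static final class OperatorPose {
            public Vec3 head;
            public Vec3 leftHand;
            public Vec3 rightHand;
        }

        // Per-frame step: express an entity relative to the operator's head, so the
        // scene literally re-centers itself around the human.
        public Vec3 toOperatorSpace(Entity e, OperatorPose pose) {
            return e.position.minus(pose.head);
        }
    }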

I would say that a dome-like, empty room, inward-projected, 3D-imaged, with full body tracking, would be an interesting middle-of-the-road solution. Sure, you might not get to run around it, crumple engines into basketballs and dunk them into previously invisible baskets. (Note to Iron Man designers: yes, sometimes programmers like to put in easter eggs, but come on, a virtual basketball forming out of nowhere?)

Dome.png

So what do we have here? Enough projectors to cover all viewing angles (only two depicted), a similar array of sensors for body tracking, and our main character. Now, I didn't give him glasses to make him a geek, I gave him active shutter glasses so he can see the 3D images (the red cube).

Hypothetically, let's say we have gathered all the needed hardware: several dozen projectors, a lot of tracking sensors and 3D glasses. From the software point of view, we would need a complicated process to render the 3D imaging. Active shutter glasses work best when the distance between the screen and the glasses is known (more on that later). Then we would need an approximation of where the objects appear to be when "seen" in 3D, to give an XYZ for each element. We need this to achieve accurate direct manipulation.

Leonar3Do makes 3D work

Earlier this year I found out about a 3D editing solution for *CAD software. Here is their video.

This is a very impressive demo and a really useful tool for modelers. It's available for purchase for €750. Leonar3Do is the first commercial offering I've seen that is truly a 3D sculpting tool. The setup is rather involved, with 3 hardware dongles, screen measuring, distance-point mapping on the monitor and so on. However, once it's up and running, it's a very interesting solution in the field of visualization. It is not as accurate as g-speak (it doesn't detect hands) and not as setup-free as the Kinect, but it comes closest to our Renaissance UI, because it projects 3D images into a real space where they are directly manipulated and seen as real objects.

This looks like our solution, right? Well, it does have several limitations, some of which can be eliminated and some that can't be.

Accuracy on the Move

The Playstation Move accessory for Playstation 3 provides an interesting mix of technologies.

The Move works in combination with the PS Eye camera for the PS3, and it is also capable of head tracking and accurate 3D manipulation. So how does it work? In the demo, you can see that the Move controller's motion is detected back and forth in 3D space. The colored ball on top of it is responsible for that. The PS Eye camera picks up the ball, sees it as a circle and measures its diameter. Based on that diameter it can calculate the distance. The most impressive part of the demo is, however, the accuracy of the sensors within the controller. Even the slightest changes are picked up by the controller and rendered accordingly. The motion sensors in each Move controller feed data back to the PS3. Together with the Eye, they provide a full picture of the controller's XYZ coordinates and orientation. I have not seen any convincing 3D (active shutter glasses variety) demos with the Move yet, but I think it has the potential to surpass Leonar3Do in 3D sculpting. I've read that some internal Sony teams have already started using Moves as 3D sculpting tools. For more *ahem* entertaining uses, see this.
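As a back-of-the-envelope illustration of that diameter-to-distance trick, here is a small sketch using the standard pinhole-camera relation: distance = real diameter * focal length (in pixels) / apparent diameter (in pixels). The focal length and ball diameter below are placeholder values chosen for the example, not Sony's actual calibration.

    // Rough distance estimate from the apparent size of the glowing ball (illustrative numbers).
    public class MoveDepthEstimate {

        static double distanceMeters(double ballDiameterM, double focalLengthPx, double apparentDiameterPx) {
            return ballDiameterM * focalLengthPx / apparentDiameterPx;
        }

        public static void main(String[] args) {
            double ballDiameter = 0.046; // assumed ~46 mm sphere
            double focalLength = 540.0;  // hypothetical PS Eye focal length, in pixels
            // The closer the controller, the bigger the circle the camera sees:
            System.out.println(distanceMeters(ballDiameter, focalLength, 25.0));  // ~1.0 m
            System.out.println(distanceMeters(ballDiameter, focalLength, 12.5));  // ~2.0 m
        }
    }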

So, 3D, accurate, direct manipulation? Sure, it would be nice if it had the firepower to work only with our hands, but Sony realized the hardware and software are not good enough at this moment for that. As it turns out, you need some buttons. Honestly, I think Microsoft made a big mistake when it embarked on the "You are the controller" slogan. The Kinect would have been perfect working with the Move controller. As it is, Sony is supplementing the lack of sensors in the Playstation Eye with software and the Cell processor's huge capabilities for number crunching. So, while Kinect sensitivity might increase in the future, I believe Sony delivered the better solution for this moment.

And there is the one thing extra that Move brings to the table. Feedback.

The case of the missing feedback

A topic I have not covered yet is input feedback.  It's missing in Iron Man, Kinect, g-speak or Leonar3Do and it's a huge part of the experience. It's also one of the problems that falls in the same category as accurate holograms: something for which, given the amount of freedom we would want in the Renaissance UI, we don't have a solution yet.

Why is feedback important? I've written about the benefits of having an interface that mimics physical properties. Kinetic scrolling, inertia simulation, acceleration and deceleration are all bits that enhance the user's experience and provide familiarity in a new setting. We all learn the physics of this world from the earliest age. A natural user interface would be usable by a 2-year-old, as it would mimic the world they have already learned. And if your interface is usable by a 2-year-old, it means other users will thank you for it. But I digress.

The missing bit from all this is reactivity or feedback. In the game console vernacular, it's also referred to as force feedback. You grab a stone, it is heavy. You pull a string, it opposes resistance. You press a key, it has a travel distance and a stop point which gives you a confirmation that a key has been hit.

In all the Iron Man scenes, that feedback is missing. When playing with Kinect, that feedback is again missing. To accurately throw an object and hit a target, our body has learned to apply the required amount of force based on the object's weight (sensory input), distance (visual input) and previous experience (throwing other objects). Hitting a target through thin air can be hit or miss. Having typed several paragraphs of this article on an iPad, I can tell you firsthand that missing feedback is the thing that kept my typing speed down the most.

The Playstation Move has a force-feedback engine inside it. So, while driving a car, hitting a wall would make the controller bounce in the opposite direction. This adds a lot to the realism of operating such an interface.

The Novint Falcon product is probably the best feedback device on the market right now. ArsTechnica had this to say about it:

When you fire the shotgun, the orb kicks in your hand; it's impossible to fire any fully-automatic weapon for more than short bursts while keeping your aim in one place. This is interesting, because the game suddenly becomes much more tactical—you have to think about what weapon you're using, and aiming is much more difficult. I mean that in a good way; this feels more real in many ways.[...]The Falcon makes Half-Life 2 much more engaging and immersive.

I really don't have much to add on this, so please check out Arstechnica's review.

 

In closing, there are many challenges ahead for building a real-life Renaissance UI, but with systems like Kinect, Move, g-speak, we are moving closer to that reality every day. Besides g-speak, you could purchase any of the other devices and have unique and novel interfaces for computers in your living room.

We are still in the very infancy of interfacing with such controls. Navigation, data visualization, gestures and workspace organization are all things which will be solved in software and which we will need to figure out in the coming years. I will address my ideas for most of these in a future series of articles.

There is something to be said about sparking imagination. The role of filmmakers is inspiring real life - Star Trek, Minority Report, Iron Man all raised a bar that we can now only reach with our imagination. And it is that imagination that drives us forward to build a ladder to that bar.

And now, here's the Iron Man song, as performed by Black Sabbath. No, it has nothing to do with interfaces, design or innovative controllers. It's just 5 minutes of awesome rock.

 

 

Design like Iron Man

Added on by Mircea Botez.

What is the next step in natural user interfaces? And what do comic book heroes have to say about it?

Iron Man 2008

Iron Man is one cool hero, and Robert Downey Jr.'s 2008 portrayal made him even cooler than the paper concept.

But what did a geek notice while watching the movie, sighing and wanting the same thing? No, it's not a supersonic costume with limitless energy, but the 3D direct-manipulation interface that RDJ used to build said costume.

Though present only for a very limited time on screen (after all, there are baddies to kill, damsels to save), it was an "oh wow" moment for me. Two-dimensional direct manipulation is already here, thanks to the likes of Surface, the iPhone and iPad, and multitouch technology. But what about accurate, responsive 3D manipulation?

 

 

A discussion I had some time ago made me think again about the interface envisioned in Iron Man two years ago. It is, I believe, the holy grail of computer interfaces. Direct, accurate manipulation of digital matter in three dimensions. The software used in the movie was representative of an AutoCAD-like package, used in modeling houses, cars, furniture, electronics. Let's analyze the specifics in the video and see how far along we are in building this.
No interface elements. There aren't any buttons, sliders or window elements. The interface simply disappears and you are working directly on the "document" or "material". When designing any software product, the biggest credit a UI designer can get is that the user doesn't notice the interface and he or she just gets things done.

Kinetic/physical properties. Movement of digital items has speed and acceleration (rotation of the model in the first video), mass, impact and reactivity (throwing items in the recycle bin unbalances it for a moment). All of these are simulated for one purpose: making the user believe he is manipulating the digital entities in exactly the same way one would act on physical objects. (A minimal sketch of this kind of inertia simulation follows this list.)

Advanced (3d) gestures for non-physical manipulation. There are, of course, a large number of actions a computer can perform which don't map to a physical event such as the extrusion of an engine.

Accurate 3D holographic projections. The digital elements take a physical shape and are projected in 3d space.

Accurate response to direct manipulation. There are no styluses, controllers, or other input devices used.
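Here is the inertia sketch promised in the "Kinetic/physical properties" item above: a tiny, illustrative model of kinetic movement in which a flick sets a velocity and friction bleeds it off frame by frame. The constants and names are mine, chosen for the example rather than taken from any real toolkit.

    // Minimal kinetic-scrolling / inertia simulation (illustrative only).
    public class KineticScrolling {
        private double position = 0.0;
        private double velocity = 0.0;              // set by the last flick gesture
        private static final double FRICTION = 3.0; // per second; how fast a flick dies out

        // Called when the user lets go after a flick.
        public void flick(double releaseVelocity) {
            this.velocity = releaseVelocity;
        }

        // Called every frame with the elapsed time in seconds.
        public void update(double dt) {
            position += velocity * dt;
            velocity *= Math.exp(-FRICTION * dt); // exponential decay of the flick
            if (Math.abs(velocity) < 0.01) {
                velocity = 0.0;                   // come to rest instead of creeping forever
            }
        }

        public double getPosition() { return position; }
    }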

So how far along are we? If you think this is all silly cinema stuff, prepare to be pleasantly surprised.

Learn to g-speak

G-speak is a "spatial operating environment" developed by Oblong Industries. If the above demo didn't impress you, check out the introduction of g-speak at TED 2010 and some in the field videos as this digital pottery session at the Rhode Island School of Design. The g-speak environment speaks to me as Jeff Han's multitouch video did back in 2006. It took 4 years for that technology to appear in a general-purpose computing device as is the iPad. The g-speak folks hope to bring their technology to mass commercialization within 5 years.  That sounds pretty ambitious, even coming from the consultants of the 2002 Minority Report movie. Right?

Bad name, awesome tech: enter Kinect

The newly released Kinect is an add-on for Microsoft's XBox gaming console. Whereas g-speak is still some years away from commercialization, you can get a taste of it right now with the Kinect.

Combining a company purchase (3DV), tech licensing (PrimeSense) and some magic Microsoft software dust, Kinect was born. Here are a few promotional videos, if you can stomach them. And here's Arstechnica's balanced review. According to PrimeSense,

The PrimeSensor™ Reference Design is an end-to-end solution that enables a computer to perceive the world in three-dimensions and to translate these perceptions into a synchronized depth image, in the same way that humans do.

kinect-diagram.jpg

The human brain has some marvelous capabilities for viewing objects in 3D, helped by an enormous capacity for parallelism. It only needs the two inputs from our eyes to tell distance. Lacking the brain's firepower, Kinect uses a neat trick: projecting infrared rays into the room with an IR light source, which are picked up by the second camera (a CMOS sensor). Here's how the Kinect "sees" our 3D environment.
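Both tricks, two eyes or an IR projector paired with an IR camera, come down to the same triangulation: a point seen from two viewpoints a known baseline apart is shifted by a disparity, and depth = focal length * baseline / disparity. The sketch below illustrates the relation with made-up calibration numbers, not the Kinect's real ones.

    // Depth from disparity: the triangulation behind stereo vision and structured light
    // (all constants here are illustrative, not real Kinect calibration data).
    public class DepthFromDisparity {

        static double depthMeters(double focalLengthPx, double baselineM, double disparityPx) {
            return focalLengthPx * baselineM / disparityPx;
        }

        public static void main(String[] args) {
            double focalLength = 580.0; // hypothetical IR camera focal length, in pixels
            double baseline = 0.075;    // assumed projector-to-camera distance, ~7.5 cm
            // Nearby objects shift the projected IR dots a lot; far objects barely at all:
            System.out.println(depthMeters(focalLength, baseline, 43.5)); // ~1.0 m
            System.out.println(depthMeters(focalLength, baseline, 14.5)); // ~3.0 m
        }
    }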

kinect-shell.jpg Wall-E, is that you? © iFixit

Besides the IR projector and IR receiver, Kinect also comes equipped with a VGA camera and a four-microphone array. Microsoft took the PrimeSense design and added the VGA camera for face recognition and to help the 3D tracking algorithm. The microphones are used for speech recognition; you can now yell at your gaming console and it might actually do something. All this in a $150 package that you can buy today.

From some reports I read on the net, it appears Microsoft spent a lot of money on R&D for Kinect. The advertising campaign alone is estimated at something like $200 million. It is only natural to assume that they have bigger plans for Kinect than just having it remain a gaming accessory. I believe Microsoft is betting on Kinect to represent the next leap in natural user interaction. Steve Ballmer was recently asked what the riskiest bet Microsoft was taking was, and replied with "Windows 8". The optimist in me says Windows 8 will be able to use Kinect and will have a revised interface to suit 3D manipulation. The cynic in me says he was talking about a new color scheme.

So what do we get from the original list? Almost no interface elements, kinetic/physical properties, and advanced 3D gestures. They also added some really cool stuff via software: for example, in a multiplayer game, if a person with an XBox Live account comes into the room, they are signed in to the game automatically, simply via face recognition. Natural language commands bring another source of input for a tiny machine that knows much more about its surroundings than previous efforts.

What do we miss? Holographic projections, accurate response and, something also missing from Iron Man's laboratory, tactile feedback. The early reviews for Kinect all mention this in one way or another... Kinect's technology, when it works, is an amazing way to interface with a computer. When it breaks down, it reminds us that there is still a lot of ground to cover.

Microsoft's push for profitability (understandable, remember this is a mass-consumer product) removed an image processor from the device. This means that it needs external processing power. The computing power reserved for Kinect is at the moment up to 15% of the XBox's capability. The small cameras and their proximity to each other require a distance of 2-3 meters from the device in order to operate it successfully. Because of the small amount of processing power reserved for it, Kinect's developers have supplied the software with a library of 200 poses which can be mapped to your body faster and more easily than a full body scan. You cannot operate it sitting down; it's my opinion that this is a side effect of the 200 pre-loaded poses. You can also notice in the g-speak video above that their system reacts to the tiniest change, even when they move just their fingers. How do they do that? By using 6 or more HD cameras (and tons of processing). The 340p IR receiver and 640p video camera just don't cut it for such fine detection. This is, again, an understandable means of reducing the cost.

On the other hand, Microsoft made a great move by placing Kinect 1 next to a gaming platform. Games are by their nature experimental, innovative processes. This gives everyone huge amounts of freedom to experiment. Made a gesture-based interface and no one likes it? You can scrap it, and the next game will try something different. This will give Microsoft valuable data for improving Kinect and filtering out bad interaction paradigms.

Kinect has a chance to evolve and become the next natural way to interface with computers. With increases in processing power, accuracy will increase. If you want to play like Iron Man, you can do so now with Kinect.

In the next installment, I'll talk about accuracy, feedback, the Playstation Move and the (sorry) state of holographic projections.

 

Dear recruiters

Added on by Mircea Botez.

I started writing this on LinkedIn, but then couldn't stop.

Dear recruiters. I am happily employed, but like anyone, I sometimes go through my spam folder. So here are some tips, free of charge. They may help you at your job. In the highly unlikely event that actual recruiters read this, here it goes.

You want to get someone to work at your company or the one you are recruiting for? Please take an extra moment to make sure that:

  • the position you are recruiting for is not lower than the one that is listed on someone's profile page. I'm sure Junior developers are highly praised and looked upon as gods at your companies but I left that position 6+ years ago.
  • you are actually recruiting in the general technology area that my profile lists. PHP may be the best technology ever for you, but I won't go near it willingly.
  • 3000+/5000+/INFINITY+ connections? You must get up really early in the morning. Here's a cookie.
  • I am simply dying to be kept up-to-date with market opportunities that fit my profile. I'm sure you'll leverage your social connection synergies to match me to some large-enterprise/huge opportunity/promising startup every day now.
  • Spam is hated for a reason.  Don't be a spammer. Generic messages sent to hundreds of people don't work.
  • "We would be grateful if you could send us your contact data". I'm not sure how this message was even delivered to me since you clearly don't have my contact data. Besides, with my LinkedIn profile, Facebook profile, this blog, my Twitter account, there is simply NO WAY to get in touch with me. Now, where shall I fax my contact info?

Take the time to read someone's profile, blog and interests; use correct spelling and common sense. I'm sure the people you work for will appreciate you getting the attention of people they will then want to work with. The keyword here is people. Not candidates, not potentials, not resources. In the end, it's always people who do the work.

The year of mainstream Linux, 2010 edition

Added on by Mircea Botez.

Synopsis: Here's my tale with Linux over the years and why I believe Android fits the bill for this article's title. But first, a bit of history and how the desktop had to change for Linux to be on it.

Ah, Linux... champion of open-source, love of computer geeks everywhere and owner of the cutest Operating System mascot around.

Most of my colleagues label me a mac-freak. With two Apple laptops, two iPods, an iPhone and a Time Capsule in the house, I can't really blame them for it. However, not all of them know that before hooking up with Apple I had a three-year stint with Linux.

This was almost 8 years ago, in a land where there was no Ubuntu, Android wasn't even an idea and editing /etc/X11/XF86Config-4 was the only way to change the screen resolution. It's been a while, so allow me to reminisce a bit.

"Damn kids, get off my lawn!"

I started off with Red Hat 7.1, in a brave attempt to have a triple-booting system together with Windows 98 and the then newly released Windows XP. I sat down on a Friday afternoon and emerged from my room Sunday around lunch. I slept about 5 hours through the whole process and, after 20-something installation attempts of the three operating systems, I gave up in defeat.

I was intrigued by my failure, and by the new worlds of technology I had just learned about. It's funny how Microsoft's domination of the operating system market made it strange to even question the status quo or think about alternatives. Learning about open source, volunteers, Unix, the command line, kernels and distributions was as strange to me as coming out of the Matrix was to Neo.

A few months and several stubborn sessions later ("you will learn vi's commands or starve at this keyboard"), I had learned a great deal about operating systems, partitioning, package management, scripting, window managers, boot loaders and other assorted varieties of unix-y knowledge. I firmly believe that any developer should know the innards of his/her preferred operating system, as well as what alternatives exist. This knowledge will help him/her write better code in some instances but, most importantly, will help with debugging software when something goes wrong in the lower parts of the technology stack.

Red Hat, Mandrake, Gentoo, Slackware, Debian all served time on my desktop. Lycoris Desktop, Linspire, Yoper, Mepis, FreeBSD and other curiosities such as Linux From Scratch had a brief run during a period of experimentation. Knoppix was making the rounds as the first usable Live-CD Linux distro, a feature now common to all distributions. At the time, running an OS from a CD was nothing short of amazing, even if agonizingly slow. Debian Unstable eventually became my base OS and xfce my window manager of choice following testing of Blackbox, Fluxbox, IceWM, WindowMaker, Enlightenment, and of course, KDE and Gnome.

I remember clearly spending one week trying out kernel builds on the 2.4 branch, ranging from keyboard-biting frustration to enlightening exhilaration. I made some really good friends that taught me as I went along. I understood communities and open source. I joined a LUG and went to a conference. I also didn't spend more than 6 months running the same OS on a daily basis.

Three years is a long time to run such an experiment, but I don't regret doing it (I also don't recommend Slackware or LFS to anyone, either). I probably learned more about computers during this time than in any other period. I wanted to start a business on Linux consultancy.

So what happened?

University years passed by and pretty soon I needed my computer for "real work". Eventually the thrill of discovery and learning wore off and I grew weary of spending hours configuring things just to make them work. My respect for the Unix way of doing things remained, so I couldn't go back to Windows. Ubuntu was just a blip on the radar in 2003-2004. In spring 2005 I ran across John Siracusa's excellent review of Mac OSX Tiger and the course was set. John's reviews have been epic undertakings over the years, sometimes anticipated by the community more than the actual releases of OSX. His attention to detail, precise critique and deep Unix knowledge drew my admiration and made me want to learn more about this OSX. One typical feature of his reviews is the attention to aesthetics. All of these, I would later discover, are things highly treasured by the mac-community; I'm sad to say the last one is still absent from their linux-minded counterparts.

Three months later I did what any self-appointed geek does at some point: buy the most capable computer he doesn't really need. I went for a dual-CPU, 2.7 GHz G5 Power Mac and put my Linux days behind me.

A modern, Unix-based operating system built on a FreeBSD-derived foundation meant I would have Unix strength beneath the hood while benefiting from an interface built with usability and speed in mind. Sure, I might give up some "freedoms" found in the Linux world, but really, how many times do you need to change window managers?
Which brings me to the topic at hand.

Mainstream, schmainstream

Mainstream software as a concept lives and dies by the number of people using it. Software ecosystems thrive when users drive demand that developers strive to meet. I'm not going to mince words here: where operating systems are concerned, everything outside the mainstream is a highly specialized tool, an academic experiment or a hobby.

It so happened that during my years running Linux and thereafter I ran across several articles, forum posts and discussions as to which year would finally be the year of "mainstream" Linux. What drove linuxists to this goal besides recognition and free software ideals?

Linux developers were united by another thing: an idealistic underground current against the Microsoft "oppression". Even today, Ubuntu's Bug No. 1 stands as an example of this counter-movement:
Microsoft has a majority market share in the new desktop PC marketplace. This is a bug, which Ubuntu is designed to fix.

Microsoft's monopolistic strategies of the past, shady business decisions and outright hostile campaigns against Linux painted a big target on its back. Flame wars ensued, parodies popped up, salvos were fired from every camp. "Microsoft is evil/no it isn't/yes it is" flame wars will eventually pop up in any tech community.

It's well known that humans unite easily against a common enemy and rally behind heroes in any battle. And although Microsoft has always been the "enemy" for the Linux camp, a true "hero" never quite emerged. I've often thought of Ubuntu as a pacifying unifier of the various Linux tribes, one that at the same time spreads a message of love and understanding for users.

The other OS company running in the mainstream race, Apple, faced the same uphill battle against the Microsoft monopoly. They had a more focused approach and a lot of money, and still, after many years, they sit somewhere between 5 and 10% market share worldwide.

The desktop wars were won by Microsoft a long time ago, and the Windows+Office+Exchange+Sharepoint combination will be hard to "beat" in the near future. Apple had a clean break with the iPod, the iPhone and pretty soon with the iPad. Google won the internet race and Linux is hard at work on servers, embedded devices and phones.

Rise of the replicants

In November 2007, a new hero emerged in the Linux community. Android travelled a long path from a Palo Alto startup snatched up by Google in 2005 to an alliance-backed, open-source contender for the mobile operating system crown.
Microsoft ignored the web and Google snatched it away. Microsoft also ignored the mobile space and Apple stole the spotlight. Nokia struggled to unify its many platforms and UI toolkits. RIM focused on email and business users, while HTC took it upon itself to graft a modern, pleasant interface on top of the aging Windows Mobile platform. Apple had shown with the iPhone that consumers appreciate usability along with a top-notch media and web interface. Mobile device manufacturers needed a modern operating system with a big software developer behind it.

This is the landscape in which Android was introduced by the two Google founders who rollerbladed their way through business suits when introducing the HTC G1 phone.
Rollerblades ©Gizmodo
Suits ©Engadget
The G1 launch didn't set the world on fire but, slowly and surely, Android gathered a lot of momentum. An army of droids is being assembled as I write this (tip o' the hat to my friend Mihai).

Used to lengthy flamewars from the past, I slowly started to recognize a trend in the comments on Android articles on the sites I frequently visit. It really dawned on me that Android had become "the hero" of the Linux community after reading David Pogue's amusing followup to his Nexus One review:
Where I had written, "The Nexus One is an excellent app phone, fast and powerful but marred by some glitches," some readers seemed to read, "You are a pathetic loser, your religion is bogus and your mother wears Army boots."[...] It's been awhile since I've seen that. Where have I seen… oh, yeah, that's right! It's like the Apple/Microsoft wars!
Yes friends, wars, passion, heroes! Being an iPhone-toting Java developer among the open-source enthusiasts in our company, I soon started to get looks and remarks like "yeah, that iPhone guy who bows to Steve Jobs". Because, you see, Android managed to unite two battle fronts: Linux developers as well as Java developers (but that's a topic for a future article).

As I mentioned earlier, I've always been a supporter of Linux, even if it isn't apparent at first glance. That's why I always get a laugh when overhearing the above line. At the same time, I'm glad to see passion among developers for a Linux-based platform. I truly believe passion is needed to bring people to create software, develop an ecosystem, rally behind an idea and, yes, bring it into the mainstream. This guy had the right idea, if lacking a bit in style.

Despite my continual purchases from Apple, I also believe competition is good. And unfortunately, besides Android, there haven't been many others lighting fires under Apple's iPhone platform and forcing it to address its shortcomings.

But why a phone? And surely, if we aren't "winning" on the desktop, it's not truly winning, is it? Apple and Google may have targeted phones at first for different reasons and from different backgrounds, but they found themselves on common territory. Here's my take on it.

The desktop has been gradually giving way to mobile. Laptops, smartphones, tablets, e-readers. The computing landscape has changed in recent years, a fact obvious to many. We are witnessing a mind-shift, a transition from general-purpose computing to device- and activity-specific computing. Falling costs, shrinking sizes and increasing computing power made the original iPhone twice as fast as my first computer and the Nexus One five to six times as fast. What about constraints? Memory, storage and screen real estate are all at a premium on mobile devices. You can't just plug in another hard drive. You can, however, turn that constraint into a valuable creative asset: focus.

I believe the focus on this class of devices and their consumer orientation made them a success. Why is that? Targeting a reduced platform, a niche if you will, ensures you don't get distracted or waste resources. You can fail without taking down the company. It's a relatively low-risk avenue. It's an excellent test bed for new interaction and UI paradigms. And if you play your cards right and use the correct development approach, you can then expand your operating system onto other, more generic devices and eat into the desktop's hegemony.

Interestingly, Apple and Google arrived here by different roads. Apple leveraged its iPod legacy of industrial design and its flexible OSX platform, with a focus on media and entertainment (much of that being games) but also a premier web experience. Google wanted to leverage its excellent infrastructure for the "data in the cloud" paradigm while promoting Linux to mainstream use.

Google made the laudable decision to keep Android open-source. As a result, with people starting to use it for ebook readers, upcoming tablets and netbooks, enterprising developers are rapidly expanding Android's reach. The emphasis on portability and battery performance, and Google's focus in this area, will ensure that for some time Android remains a mobile-device OS.
Here's to more Android friends. Image ©Richard Dellinger

And that's a really good thing, because mobile is where the desktop is now.

Building up consensus over multitasking and the iPad

Added on by Mircea Botez.
Miling Alvares over at smokingapple is on the same page with regard to multitasking on the iPad:
To me, multi-tasking is a workflow. You want to do more than a couple of things at one time. And by one time, I mean “during one session”—because there’s no way you can possibly devote your vision to two tasks.

He points out something I'd forgotten to mention: the iPad is much faster at switching apps than the iPhone and iPhone 3G, bringing the idea of seamless task switching a bit closer. Rob Foster materializes the iPad-using grandma I was alluding to:
My mother-in-law walked in the door the day of the keynote and the first thing out of her mouth was “Did you see that new Apple iPad? That looks like it would work for me. Would that work for me?”
Via daringfireball.

The iPad, multitouch and multitasking

Added on by Mircea Botez.
I will try to put aside my feelings as much as I can in this piece. Having studied multi-touch technology for my diploma thesis and being an Apple user for several years, I am bound to be a bit biased, so let's get the obvious out of the way.

I believe Apple has sought to redefine computer interfaces by means of multitouch technology. Its work started with the iPhone, on which it iterated three times, both in software and in hardware. How did they do that? A good product, a large developer ecosystem, and the App Store for easy customer consumption of software. Many people believe, with justified reasoning, that Apple imposed the App Store as a gatekeeper for developer money and platform control.

In light of the recently revealed tablet, another idea cropped up in my mind. Some tech pundits said that the tablet was the thing Apple set out to build in the first place. After seeing the deluge of applications and software experiments that flooded the App Store, I think the iPhone served as an excellent test bed for multitouch applications. 120,000 apps? 3 billion downloads? Apple turned the App Store/iPhone into a giant usability test. Apple itself built very, very little software for the iPhone, purposefully so. They make the simplest thing that works and then polish the hell out of it. Compared to the 120k apps, Apple's code is just a blip. Developers and users all took part willingly in this giant test, while Apple observed, took notes and made a fair bit of cash.

Of course, the iPhone is a cool device, it was marketed well and there were tons of logistics behind its financial success, but I see that as a secondary driver of Apple's intentions. David Bowie once said every artist needs to release, every once in a while, an album that has a marketable sound, so he can then write three more that truly express his creativity.

I believe the iPhone was one of Apple's commercial albums. It will soon become a commodity, driven by lower yearly costs, while the tablet will be the start of a series of masterpieces. In fact, I will liken this first iteration to an early sketch that a master does before taking on the masterpiece, a study, if you will.

iPhones, developers and user interfaces

If you were to say "I will redefine the way computing is done and introduce a completely new way of interacting", how would you go about it? Would you develop and refine UI guidelines and samples and conduct tests in secret for 20 years, then release the result as the perfect way of operating computer interfaces by touch? Of course not. Market reality coupled with society's push towards commercial success prevents such things from happening.

How would you make sure that others follow your lead? How would you get developers to switch their mindset towards creating user interfaces centered on touch? Sure enough, some developers had been waiting for years to create such interfaces, with high attention to detail and high esteem for user-centered design. On the whole, however, software has become a messy affair. Quality tends to get sacrificed in favor of features and time to market; marketers and project managers drive products instead of designers and engineers. I have seen it many times first-hand.

When the iPhone SDK was introduced, developers complained that their favorite language was not allowed on the platform, that their toolkit was not supported, or that some other favorite toy was missing. There were harsh criticisms of the App Store, mostly related to Apple not allowing this or that feature for developers. Some developers were quite vocal about their displeasure with Apple, even going so far as to threaten to leave the OSX ecosystem for good. Flash, Java and open-source supporters all cried out that Apple was playing a monopoly card and that Steve Jobs is a control freak.

Apple mostly ignored these complaints and provided some excellent APIs in the form of Cocoa Touch, along with a lot of ready-made controls for developers. It took a while to create them, but their quality ensured that many developers would use them. Some happily, some grudgingly, they took to Xcode and started building interfaces that are not only usable but simple and functional, and that keep a close line to the look and feel of the iPhone OS, ensuring users stay familiar with the way iPhone applications work.

Two years and 75 million devices later, proficient developers and consumers alike declared "whining will only get you so far; good stuff will get you our cash".

In the past two years, Apple has taken its time with releasing new features. Copy-and-paste and other missing features became a running joke among tech writers and blogs. The App Store was seen (and still is, in my opinion) as a walled garden. Many have seen greed and a need for control in this. I think there is another point to it: Apple constrained developers to think more about touch-based input and how to design quality apps around this new way of interacting with a device. Apple set the example with the few applications it released on the device, and then slowly with the approved ones. Hence the big usability test I was talking about earlier.

Multitouch, like the mouse before it, is definitely not new. [Sidenote: in the upcoming article about the history of user interfaces, I will detail more on this subject.] The public at large, however, was largely unaware of it until the release of the iPhone. Many developers were in the same boat. Software for mobiles was, for the most part, an attempt to shrink existing desktop interfaces onto tiny mobile screens. Scrollbars, windows and tiny buttons made their way onto the devices, with the stylus conjured up as a way of "manipulating" the interface. The iPhone was the first "smart" phone to do away with all of those. Almost everyone who uses one, even temporarily, praises its ease of use.

The big elephant in the room, however, was the lack of multitasking. We've become so used to it in computing, and the iPhone does so many things akin to a "real" computer, that it wasn't long before all the techies were wondering when Apple would "get to it" and flip the switch, enabling the much-needed multitasking we so dearly love.

Having jailbroken my iPhone, I wasn't bothered too much by this absence. It is a multitasking OS; it's running Unix, for crying out loud. A little worry inside my head was saying "Apple is being sort of mean, not allowing this for third-party apps". For some time, my concerns were alleviated by Apple's defence: "it's a phone, you don't really need multitasking. Besides, running many things in the background will only drain your battery too quickly".

The iPad affair

And then came January 27. The device which had been hyped for years was finally unveiled by Steve in typical Apple fashion. To say that the reveal was not polarizing would be to ignore the proximity of ice on board the Titanic. My good friend and colleague Rares promptly sent me this funny take on the iPad launch commercial from College Humor. Numerous pundits started pouring out vitriol towards Apple and the new device as if its failure would doom mankind. No camera! It's just a big iPod! Where's the innovation? Couldn't you pick a name that sounds less like a feminine product?

Many bloggers took to critique, with the notable exception of a few people who were actually there.

I will admit, I was pretty upset about the lack of certain features. As a multitouch enthusiast, I loved my iPhone but I was expecting a "real multitouch computer" any day now from Apple, one that would allow me to develop freely on the device, one that is as close as possible to the computers we have now, one that would finally bring us a large multi-touch surface on a modern operating system.

Sigh. The iPad is clearly none of those things. I was a bit confused at first about the "why". I had heard from a few sources that Apple was working on iPhone OS 4.0, which would finally enable multitasking (although only on select devices). Surely a bigger, more capable device can handle that? And if that device is indeed more capable, then surely the App Store is not needed anymore?

But that is not the device Apple chose to make. Apple decided to take a risk. What is that risk? Building a multitouch, consumer-oriented operating system. I believe the iPad is the "grandma" computer.
Fraser Speirs put it best: The people whose backs have been broken under the weight of technological complexity and failure immediately understand what's happening here. Those of us who patiently, day after day, explain to a child or colleague that the reason there's no Print item in the File menu is because, although the Pages document is filling the screen, Finder is actually the frontmost application and it doesn't have any windows open, understand what's happening here.
Remember the large usability test with the iPhone I mentioned earlier? It turns out 80 million people or so are doing just fine without multitasking on their phones. I recalled a few links and an actual study which didn't speak too highly of our habit of juggling tasks. Thinking more about it, I believe we don't really multitask at the computer, since we actually focus our attention on one thing at a time. It is true, however, that our time is segmented more and more as we answer IM, receive an email, take a Skype call. There's still no consensus on whether this is beneficial or just illusory, as we get the impression that we are getting more things done at the same time.

With that said, our brain is a massively parallel, mean multitasking machine. What do we multitask? Our movements, our perceptions, our internal activities, alerts (we're hungry, we're sleepy, and so on) are all managed by our brain without any of us spending any time concentrating on them. Plenty of things for it to manage on its own while we do the most mundane of tasks. I'll focus on the perceptual, or sensory, tasks that our brain handles. More specifically, listening to music.

Many times when I see the complaint about the absence of multitasking, I hear about Pandora streaming, Last.fm, or some other third-party music application that users want running. Which, if you really think about it, makes sense. Listening to music is, after all, one of the conscious tasks that our brain juggles pretty well alongside others in a low-stress situation.

The other type of multitasking I've thought about is a reference-creation type of activity. We do these every day, and have been doing them since before computers were around. We write an essay with an open book in front of us. We paint a landscape on the canvas next to us. We write a document while surfing the web and we write code while looking at the documentation. Sure, you can multiply the number of open documents and web pages (I should know, I often have up to 30 tabs open at a time), but the basic action stays the same, reduced to two items: look something up in the reference, create something at the destination. An Alt/Command-Tab away, information is at our fingertips. With two monitors, this setup is quite nice to work with.

On the iPhone, however, where memory and display size are at a premium, you cannot (yet) afford to keep two applications open. Still, I will venture to say it comes very close to the "Alt-Tab" way of working. You pop up Safari, open a website, hit the Home button, open the email/notepad app, with or without a copy/paste operation in between, then another tap of the Home button and you're back to where you were in Safari. "You do realize, Mircea, that the app is removed from memory when you hit the Home button! This is most definitely not multitasking!" I would argue that, at this point, it is just a technicality. iPhone apps with session saving well implemented pick up from where you left them. As an added benefit, you can even turn the device off and the next day they will again pick up exactly where you left them, which you cannot say about too many of their desktop counterparts.
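
For the skeptics, here is a minimal sketch of what "session saving" boils down to. It's in Java purely for illustration (a real iPhone app would do this in Objective-C through the SDK's lifecycle callbacks), and the file name and keys are made up: persist the little state that matters when the app exits, restore it on launch.

```java
import java.io.*;
import java.util.Properties;

// Minimal sketch of the "save on exit, restore on launch" pattern that makes
// quick app switching feel like multitasking. The file location and keys are
// hypothetical; a real iPhone app would use the SDK's lifecycle callbacks.
public class SessionState {
    private static final File FILE = new File("session.properties");

    // Called when the app is about to quit (the Home button, in iPhone terms).
    static void save(String openDocument, int scrollPosition) throws IOException {
        Properties p = new Properties();
        p.setProperty("openDocument", openDocument);
        p.setProperty("scrollPosition", Integer.toString(scrollPosition));
        try (OutputStream out = new FileOutputStream(FILE)) {
            p.store(out, "last session");
        }
    }

    // Called on launch: pick up exactly where the user left off.
    static Properties restore() throws IOException {
        Properties p = new Properties();
        if (FILE.exists()) {
            try (InputStream in = new FileInputStream(FILE)) {
                p.load(in);
            }
        }
        return p;
    }
}
```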

Whereas multitasking on the desktop was seen as the benefit of keeping several apps in memory and avoiding a lengthy start-up, "multitasking" on the iPhone requires that an app exit from memory, precisely so that the next one loads faster. And fast is the key word here. At the base level, a CPU still executes one instruction at a time, and multitasking is an "illusion" created by the speed with which the CPU switches between several operations (and yes, I know about multi-core/parallel processing; I'll talk about that in a future post).
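
If "illusion" sounds like hand-waving, here is a toy sketch: a single thread handing out small slices to two tasks in turn. The tasks are invented for illustration, but the output interleaves as if both were running at once, which is roughly what an operating system scheduler does, only millions of times faster and preemptively.

```java
// Toy illustration of time-slicing: one thread alternating between two tasks
// in small slices looks, from the outside, like both run at the same time.
public class TimeSlicing {
    public static void main(String[] args) {
        Runnable taskA = () -> System.out.println("A: doing a bit of work");
        Runnable taskB = () -> System.out.println("B: doing a bit of work");

        // One "CPU" (this thread) handing out slices in round-robin fashion.
        for (int slice = 0; slice < 5; slice++) {
            taskA.run();
            taskB.run();
        }
    }
}
```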

The third kind of multitasking that we use every day is manifested through notifications. Email, IM, system updates, browser updates. For this part, I'm still on the fence. I like them while I'm browsing the web. I'm not particularly fond of them when reading a book. I'll have to explore this in more detail. What do you think?

Was it good for you too, then?

Apple took a risk and pulled a Nintendo with the iPad. The Nintendo Wii's story is much like the iPad's. Launched around the same time as its competitors, the Xbox 360 and the PS3, it was light on features. No high-definition output, limited storage, poor graphics capabilities. Some even said it was "two GameCubes (previous-generation hardware) duct-taped together", and name jokes were in abundance. Sound familiar? Yet Nintendo innovated in a key aspect: the controller, the way you interface with a console. By adding a few accelerometers, Nintendo managed a one-to-one mapping of human arm motion to the action reflected inside a game. All of a sudden, people who had shown no passion for gaming started playing tennis, bowling and a host of other games. Nintendo focused on simple games, simple graphics, simple motion controls. And they nailed it. The Wii is the number one console of this generation, despite the hordes of gamers who predicted its demise. It opened gaming to an entirely new demographic, while Nintendo blissfully ignored complaints and raked in the cash. Now, three years later, Sony and Microsoft will launch their responses this fall: the PlayStation Motion Controller and Natal.

Will the same pattern play out with the iPad? Will Apple, by focusing on ease of use, natural mapping of inputs and speedy, responsive interfaces, open up computing to a new segment of users? Will I have to buy one for my mother so that I don't spend half an hour guiding her through a four-click browser update?

Time will tell. While Apple has taken on the risky business of migrating the user interface paradigm we've been used to for 30 years, it will be a long journey to get everyone on board. That is, of course, unless the iPad is doomed to failure and all these words were spent on nothing.

I'll give Apple until the third iteration to fix the shortcomings and bring multitouch to an OS oriented towards professional applications, the one we, the techies, have been waiting for all along.
Don't disappoint me, damnit.

From TCP/IP to Wikipedia

Added on by Mircea Botez.

The amalgamated mass of links, photos, videos, memes, social networks and sometimes useful knowledge that we call the Web is now 20 years old. At times I wonder at the pace we are moving and at how commonplace sites and habits have become that didn't exist just a few years ago.
Tools such as Wikipedia, web-based email, social networking sites and the almighty Google have transformed knowledge-sharing from a privilege of the few into a commodity for the many.
Creative communities exploded with the emergence of self-publishing, photo and video sharing sites. Flickr, Youtube, Vimeo have fostered communities of creatives and provided valuable lessons for community management on the web.
Several outlets agreed that the internet was a major factor for Barack Obama's victory in the US elections.
So how did we get here?

 

 

TCP/IP

What is TCP/IP and why is it important?
TCP/IP is the network protocol suite at the base of the internet. Its efficiency and open design have been fundamental to the internet's development. Assuming an inherently unreliable network, the protocol handles data corruption and lost bits of information, and it has many features that ensure communication is carried out successfully.
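
Before the history lesson, here is a tiny sketch of what that reliability buys the application programmer. With a plain TCP socket (Java here; the host and the request are just placeholders), you write bytes and read bytes, while retransmission, ordering and corruption checks all happen underneath.

```java
import java.io.*;
import java.net.Socket;

// Minimal TCP client sketch: the application just writes and reads a stream of
// bytes; retransmission, ordering and checksumming are handled by TCP/IP
// underneath. Host, port and request are placeholders.
public class TcpHello {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("example.com", 80)) {
            OutputStream out = socket.getOutputStream();
            out.write("HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n".getBytes("US-ASCII"));
            out.flush();

            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), "US-ASCII"));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // arrives in order, uncorrupted
            }
        }
    }
}
```
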
Among the projects developed by the US during the Cold War was the creation of ARPANET. The launch of Sputnik I convinced US President Dwight Eisenhower to create ARPA to keep up with the technological advances of the times. Lawrence Roberts was the project manager for the ARPANET project, aimed at providing a country-wide communications network based on J.C.R. Licklider's vision.

ARPANET went live in October 1969, using the Network Control Protocol, with the first communications sent between Leonard Kleinrock's research center at the University of California and visionary Douglas Engelbart at the Stanford Research Institute.

Wired, Happy Birthday, Dear Internet:

"NCP was sufficient to allow some Internetting to take place," said Kleinrock, now a computer science professor at UCLA. "It was not an elegant solution, but it was a sufficient solution."

ARPANET evolved, and many saw the need for a more efficient, open protocol that would anticipate further growth.

From http://livinginternet.com/:

The history of TCP/IP is like the protocol -- interdependent design and development conducted by several people and brought together as one.

Following the design of TCP/IP by Robert Kahn and Vinton Cerf [...] four increasingly better versions of TCP/IP were developed -- TCP v1, TCP v2, a split into TCP v3 and IP v3 in the spring of 1978, and then stability with TCP v4 and IPv4 -- the standard protocol still in use on the Internet today.

The proverbial switch was flipped for TCP/IP on January 1st, 1983. It has seen 26 years of service.
Today, there is talk of migrating the existing infrastructure to IPv6, since the IPv4 address space (the number of unique IP addresses we can use) is reaching its limits. IPv6's 2^128 addresses (compared to IPv4's 2^32) ensure that, as a friend of mine put it, "everyone on this planet will have enough IPs for each cell in their body". At the moment, unfortunately, adoption is still an issue.
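
Here is a back-of-the-envelope check of that quip, assuming roughly 7 billion people and the commonly quoted figure of about 37 trillion cells per human body:

```java
import java.math.BigInteger;

// Back-of-the-envelope check of the "an IP for every cell in your body" quip.
// Population and cells-per-body figures are rough assumptions.
public class AddressSpace {
    public static void main(String[] args) {
        BigInteger ipv4 = BigInteger.valueOf(2).pow(32);  // about 4.3 billion addresses
        BigInteger ipv6 = BigInteger.valueOf(2).pow(128); // about 3.4 * 10^38 addresses

        BigInteger people = BigInteger.valueOf(7_000_000_000L);
        BigInteger cellsPerBody = BigInteger.valueOf(37_000_000_000_000L);

        System.out.println("IPv4 addresses: " + ipv4);
        System.out.println("IPv6 addresses: " + ipv6);
        // Addresses left over for every cell of every person on the planet:
        System.out.println("IPv6 addresses per human cell: "
                + ipv6.divide(people.multiply(cellsPerBody)));
    }
}
```

That still leaves over a quadrillion addresses for every single cell, so the quip holds up with room to spare.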

In the late '80s, ARPA was overseen by the US Department of Defense. During this time, a host of corporations set out to increase network speeds and spread the deployment of supporting infrastructure. T1 and T3 lines were deployed and the network was opened up to corporations and the general public.

World wide web

In 1990, Tim Berners-Lee published a proposal for the so-called "HyperText" project, also named WorldWideWeb. As Wikipedia notes, "[the] World Wide Web is not synonymous with Internet. The Web is an application built on top of the Internet."
During their time at CERN, Berners-Lee and Robert Cailliau built the first web pages, the first browser and the first web server.
1993 marked two important announcements:
* CERN declared the WorldWideWeb free for anyone to use, licensed under non-proprietary protocols, making it possible for anyone to build their own servers and clients;
* Mosaic, the first graphical browser, was released by a team at NCSA at the University of Illinois.
These announcements marked the start of an exponential increase in popularity for the Web. Though the Web counted only a handful of sites by today's standards, among them we find a search engine, a library of scientific documents, a web directory, a webcomic, a music archive and the Yahoo website. We can identify these as precursors of many of today's websites and services.
After his departure from CERN in 1994, Berners-Lee founded the World Wide Web Consortium, or W3C, as a governing body over the open standards used in the development of the web. Over time, the consortium gained traction with all the industry players.
In March 2009, CERN celebrated 20 years of the Web's existence, with a speech from Tim Berners-Lee, a demonstration of the original browser running on a NeXT computer and various other presentations.
By uniting hypertext and the Internet, and by focusing on standards and openness, Tim Berners-Lee set another important building block in the foundation of what we now call the web.

Netscape and the browser wars

While there were many initiatives to create browsers based on Mosaic in the 1990s, some clear leaders emerged. Netscape Navigator peaked at over 80% of the browser market around 1995-96. One of its innovative features was displaying a web page while it was loading, making the text content available to users before the rest of the page items were fully downloaded. In dial-up times, this made a big difference when surfing the web.
Initially, Netscape released its browser free for non-commercial use, but later restricted free use to academic and non-profit institutions.
Netscape maintained its technical superiority with the addition of cookies, frames and JavaScript.

Wikipedia notes:

Industry observers confidently forecast the dawn of a new era of connected computing. The underlying operating system, it was believed, would become an unimportant consideration; future applications would run within a web browser. This was seen by Netscape as a clear opportunity to entrench Navigator at the heart of the next generation of computing, and thus gain the opportunity to expand into all manner of other software and service market.

In retrospect, they were definitely right; as it turned out, they were about a decade too early with this prediction. IT has proven many times that technical excellence is not the main driver of profit or market share.
Enter the dragon: Microsoft. Profitable IT markets never pass unnoticed by the software giant. They played catch-up to Netscape's feature set up until Internet Explorer 5, which was faster than Netscape (by then rebranded Communicator), had a better interface and much better CSS support.
IE added support for CSS1, which was ratified as a standard by the W3C in 1996. XMLHttpRequest, which would later help coin the term "Ajax" (more on that later), had also been present since version 3.
Microsoft began offering Internet Explorer for free, bundled with Windows, which at the time held 90% of the OS market. Windows itself saw ever tighter integration between core OS components and Internet Explorer. Bolstered by Windows' popularity and the fact that it was present by default on users' desktops, Internet Explorer went from ~5% to 30% (IE4), 50% (IE5), 80% (IE5.x) and finally 95% (all versions combined) with version 6, released in 2001. All of this at the expense of Netscape, which also made its share of bad business decisions.
Netscape's demise and dissolution as a company became part of the United States v. Microsoft antitrust case in 1998.
Approximately 1,000 people were working on Internet Explorer in 1999; as Eric Sink says, it was a team larger than the entirety of Netscape.

What follows is my opinion on the subject, after working for many years in web development and fixing IE6 issues. As close friends of mine can attest, it's a touchy subject. As of this writing, the Browser Wars Wikipedia article is marked with neutrality disputes, cleanup requests and original-research claims. From 2001 to 2006, IE6 reigned supreme over the web browser market. As soon as Microsoft reached 95% saturation, it patted itself on the back and reduced development on IE6 to a critical level. No notable features were developed in this time, only a trickle of security updates. As Fatboy Slim once said, "I'm number 1, so why try harder?"

The loss suffered by the "good guys" had more impact than anyone could have foreseen at the time. Web innovation ground to a standstill, with developers busy building large sites which worked only in Internet Explorer, with no regard for standards. IE6 was left "unfinished", riddled with implementation bugs, random behavior, poor performance, security issues and incomplete support for standards.
Want to make an older web developer cry? Tell her/him you're still using IE6. There are entire sites dedicated to listing bugs in IE's rendering engine. People have started campaigns (Stop IE6, Bring down IE6, a Norwegian anti-IE6 campaign, a wiki documenting these efforts and a Facebook group). Part of me wanted to write this article just to document this and to refer future "Why do you hate IE6 so much?" questions to this section. But I digress...

There are some who state that, in the end, Microsoft didn't win anything from the browser wars and only lost money; in fact, none of Microsoft's web sites became successful enough to recover the development costs spent on the browser. It could be argued that the huge intranet applications developed during this time provided a safe lock-in for Microsoft, leaving large corporations unable to switch to other operating systems because of the large (six figures and up, in many cases) investments in web applications that would only run in Internet Explorer 6.
However, I think whatever advantage this created was offset by the antitrust trial, the loss of public goodwill and, ironically, resistance from those same corporations to upgrading to later versions of Windows.

I believe Microsoft missed a huge boat here. They were the absolute leader in browser market share by the time Windows XP shipped. Had they continued with completing CSS 2.1 support, adding CSS3 support, improving the rendering engine and, in general, listening to people's gripes with their browser, they would have led the web in innovation and probably garnered a lot of respect from web developers.
What do we have instead? Lots of hate from web developers, increased development costs for supporting IE6, bloated and poorly coded legacy sites, hacks, and thousands of development hours wasted on working around implementation bugs.
I suspect that a Microsoft meeting went something along these lines:
IE Project manager: Ok now, we have totally obliterated Netscape!
IE Team: YEAH! High five!
MS Higher management: So now ... PROFIT??
IE Project manager: Umm, we actually have no idea how to turn a profit from it... We went all out trying to outdo our competitors...
MS Higher management: No money? No more development!
IE Team: But sir, the web? The innovation? Think of the developers!
MS Higher management: What was that? I thought I heard some low-level employee waste company time...

Why were the browser wars important? Browsers are the gateway to the web. The more standards-compliant they are and the more features they support, the easier it is for developers to create great web applications and great services for everyone. If less effort goes into hacks and incompatibilities, more time can be spent actually building great things. So let's take a look at what has been built on the web so far.

Yahoo and the emergence of Google

Yahoo is one of the oldest sites still in business. Jerry Yang and David Filo started Yahoo in 1994 as a simple directory of links, maintained by hand. As the net expanded, new services were needed to organize the growing amount of data.
Several good expansion moves and a couple of financing rounds later, Yahoo became one of the most visited sites in the world, lagging only behind AOL. What followed was a series of acquisitions and mergers which eventually became Yahoo GeoCities, Yahoo Games, Yahoo Mail, and a series of sections on its portal homepage with large amounts of content from different content providers.

Wikipedia states that Yahoo is the second most visited site:

[...]the domain yahoo.com attracted at least 1.575 billion visitors annually by 2008. The global network of Yahoo! websites receives 3.4 billion page views per day on average as of October 2007. It is the second most visited website in the world in May 2009.

The 15-year-old company has had a convoluted history, with product acquisitions, site shutdowns, and talks of mergers with eBay, Google and, lately, Microsoft. They are eloquently described on its Wikipedia page, so I will not enumerate them here. I'm glad they are still going strong, as they are one of the early Internet pioneers. My second email address is a Yahoo one; it was the first one I created after being given one in high school. I've had it since 1998 and I still use it today. My first web presence was on GeoCities (a teenage-inspired atrocity; good thing GeoCities was shut down).

As mentioned above, Yahoo is the second most visited site in the world. I don't think there is any doubt in anyone's mind about who is number one.
Google, circa 1998

Around the time I was registering that email address, two enterprising youngsters were founding Google Inc. in Menlo Park, California. Sometime in early 2000 I read, in a (printed) newspaper, a comparison between search engines. The paper listed familiar names such as AltaVista, Lycos, WebCrawler and Northern Light, with indexes of 150-200 million web pages. Sure, I was using them all the time in my discovery of the net back then, with a faint recollection of fondness towards Northern Light.
In a corner of the article I caught Google's name with another number: 500 million. I remember quite clearly my initial disbelief that such a large number of pages even existed on the internet. Nevertheless, more is better, right? So from that day on I started using Google.
The growth of information created the need for better ways to organize and search what already existed. Larry and Sergey came up with the (undoubtedly brilliant) PageRank algorithm. Google's index jumped to 1 billion, then 2 billion pages, and I think I lost track of it somewhere around 8 billion.
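
For the curious, the heart of PageRank fits in a handful of lines. What follows is only a bare-bones sketch of the published idea (power iteration with a damping factor), nothing like Google's production system: each page repeatedly passes a share of its score along its outgoing links, and the scores converge to a ranking.

```java
import java.util.Arrays;

// Bare-bones PageRank sketch (power iteration with a damping factor), nothing
// like Google's production system. links[i] lists the pages that page i links to.
public class PageRank {
    static double[] rank(int[][] links, int iterations, double damping) {
        int n = links.length;
        double[] rank = new double[n];
        Arrays.fill(rank, 1.0 / n); // start with a uniform score

        for (int it = 0; it < iterations; it++) {
            double[] next = new double[n];
            Arrays.fill(next, (1.0 - damping) / n);
            for (int page = 0; page < n; page++) {
                if (links[page].length == 0) continue; // dangling pages drop their share in this sketch
                double share = damping * rank[page] / links[page].length;
                for (int target : links[page]) {
                    next[target] += share; // pass a share of the score along each link
                }
            }
            rank = next;
        }
        return rank;
    }

    public static void main(String[] args) {
        // Tiny example web: page 0 links to 1, page 1 links to 2, page 2 links to 0 and 1.
        int[][] links = { {1}, {2}, {0, 1} };
        System.out.println(Arrays.toString(rank(links, 50, 0.85)));
    }
}
```
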
Google's speed and accuracy were unmatched. Attaching itself to the humble link, it allowed everyone to reach into the farthest depths of the web in the smallest amount of time. After a couple of years, they grafted onto their spartan pages ads which would appear next to targeted search terms. Starting with 350 customers, it exploded into the search-advertising giant it is today.
Their history is clear evidence of the exponential growth they have sustained every year, for 11 years.
Here is that history in animated form:


A few highlights: Klingon is introduced as a translation of the Google interface; PigeonRank is revealed; several scholarships are introduced; there is huge involvement in open source; Google is named the best company to work for in the US several years in a row. With its very young founders, Google paved the way for many innovations, in business as well as in technology. They were among the few to pioneer the death of the cubicle, informal working spaces, the famed 20% time, massage services and many other practices that were unusual until Google made headlines with them. Of course, no company is perfect and they have their share of criticisms, but I think they have shown that creativity and openness can be embraced in a large-scale IT operation. I will touch on the technical innovations later in the article.

Wikipedia and Open source

Though the term Web 2.0 appeared in 1999, one of the first sites embodying "the ether through which interactivity happens" was started in 2001. I believe Wikipedia to be one of the most interesting social experiments of our civilization.
What if someone told you they were starting an online encyclopedia which could be edited by anyone in the world, at any time, with no formal approval process, and that the site would run on donations? Surely, you'd say, that is overly idealistic? That it would only last a month? That vandalism would be rampant and you'd be losing money to keep the site up?
13 million articles and 8 years of collaborative, anonymous editing and review later, Wikipedia ranks among the top ten most visited sites on the internet. And yes, it still runs on donations. Do your part.
For me, Wikipedia is one of the few things which still give me faith mankind will make it after all. As a purely voluntary effort, it is quite an achievement.
As part of the web's evolution, it marked an important step in proving that there is a place for community-based, user-generated content, that websites don't get crushed if they release some (or all) of the reins to their users.
A like-minded initiative called open source was coming of age by the time Wikipedia launched. Though started in the early '80s by computer enthusiasts under the "free software" banner, it came into prominence in 1998, with the release of Netscape Navigator's sources and the rechristening as "open source".
While Wikipedia is an open-source document, Linux was to be the kernel of an entire open-source operating system. What was released in '98 as Navigator was reborn in 2004 as Firefox. Millions of sites are served by the Apache server, there are hundreds if not thousands of Linux distributions, and open-source frameworks are becoming the preferred choice in corporate IT. The development of TCP/IP and ARPANET is regarded by some as an early example of open-source, collaborative, standards-based development. The LAMP stack provided a low cost of entry for web development and hosting. If it weren't for the open-source movement, the Internet landscape would look very different today. Just think about this: 66% of the 1 million busiest sites run on Apache, and many of them run the full LAMP stack. Many of the sites you visited today are probably running open-source code.

All of these stem from two natural human tendencies: sharing and fostering communities.
People have gathered in communities since the dawn of time; it was quickly realized that the odds of survival were higher in a group. While the urgency of survival has diminished in this context, we still carry in our DNA the tendency to form groups and organize communities.

Why this blog?

Added on by Mircea Botez.

Because I care about technology, the future and human evolution.
I believe the web is guiding our path there, fueled by innovation, marking our progress.

The future of the web is the future of our civilization. It is our collective fountain of wisdom, our interconnection of services and actions, our logger of events, the keeper of our data.

We must nourish it, protect it and help it develop. Help it mature. We need to understand it, and make sure it will understand us. Because when we are old enough, it might just take care of us.

How do I think we'll get there? Sharing, standards, innovation, design.
In this blog, you'll read about web technologies, web companies, user interfaces and interaction, bits of design, game development and my accompanying thoughts on these subjects.