When Design is the “Shell” of Technology

One of the projects exhibited at the Oslo Architecture Biennale, described in The Guardian, tells the story of Mark and the experience he provides through Airbnb. Mark’s homes stage everyday family living. You will find family pictures and everything that will satisfy your fantasy of renting a family home. Well, it’s all fake. Mark’s business hacked Airbnb’s keystone value: dwelling in the everyday of somebody’s home, with all the memories, artefacts and memorabilia that each of us collects over a lifetime.

Airbnb’s strategy, indeed, turns the human perception of intimacy into a business value (which Mark flipped into the core of his business). The more the host makes you feel at home, the more the accommodation delivers the experience (and the good rating) you expect from visiting the place, whether you have ever been there or not. Intimacy is no longer a private sphere of our being, one that takes shape through the series of objects we relate to. Intimacy is something you can sell. Your life goes on the market (and gets rated), just as your image does with selfies.

Airbnb is not the only company “looking after” people’s interiors, with their collections of objects and memories; Amazon and Google are on the same page. Amazon Alexa is an artificial intelligence capable of sensing the environment. Alexa learns from you: your tastes, what you read, the music you listen to, the places you visit, the friends you see… the list is quite long. Alexa absorbs your life so that it can “suggest” to Amazon what to suggest to you. Whereas on Airbnb your intimacy mimics the social masks you need to wear to perform the character your house is staged as (romantic, modern, family, etc.), Alexa moulds the character itself (you, indeed).

Similarly, Google is shifting its business approach by changing what made it so successful: the search engine. According to this article in the MIT Technology Review, Google is ready to introduce Assistant to the public. Assistant is a “third person” that reads you and the environment you are in (physical and digital) in order to make suggestions. The ambition is to turn Google search from a generic page where you type queries into a custom, interactive character that offers information, whether asked or not. Assistant can enter a conversation you are having with friends and make suggestions on the topics under discussion.

Alexa, Assistant and Airbnb make design the Shell of technology (in Venturi, Scott Brown and Izenour’s sense), at different scales of course. What does design offer beyond decorating technology’s performance, both aesthetically and technically? Is there any value that design adds, besides embedding the sensors that connect you to the Internet? Interiors and products are interfaces at different scales that provide information. We interact with spaces and objects through algorithms that “learn” our behavior and loop information back to the private company, and then to us. What we receive is information already chewed and digested. If interiors will probably be designed to suit AI scanning (much as shopping malls are currently designed to give shops maximum visibility) and objects to keep us “busy”, what can design do? Perhaps I need to define what I mean by design: the human passion for making and working with materials, thinking about mechanisms, solving problems, satisfying needs. Does design still perform a service to society?


The Duality of Human Being

Perception is a topic I am investigating from a philosophical point of view, to understand the human concept of space as something “detached” from our ontological, and then phenomenological, understanding of being. In other words, we understand “space” as an entity outside our body. We then create forms and shapes in relation to the memories we build throughout our lives.

Nevertheless, such a basic concept, which helps us acknowledge and reify the “real”, is at stake in the context of the intelligence that machines are acquiring and developing “against” humans.

Perception appears to be guided mainly by sight, even though we get a sense of the physically real through all the senses, which feed our mind back through concepts like depth.

Google’s Project Tango, and similar efforts, are projects that give machines an understanding of depth. By collecting information from physical reality, machines can learn what depth means, much as humans do. Google DeepMind is moving beyond the direct understanding of depth by teaching machines a knowledge of space, intended as a sequence of “depths” represented by patterns drawn from crowds of images.

The interesting way this process takes shape is via video games. Why? Space. Indeed, according to this Wired article, video games are the means to teach machines the concept of spatial navigation (hence the acknowledgement of the “sequence of depths”). Machines build the concept of space by associating patterns across sequences of images.

The process imitates the way children learn. In other words, we are feeding machines spatial information that is derivative of physical space. Machines learn physical (?) reality within the digital domain of the Internet.
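To make the mechanism a little more concrete, here is a minimal sketch of the frame-stacking idea behind this kind of game-based training. It is written in Python with invented names and sizes (the fake_game_frame function, the 84×84 frames, the stack of four) purely for illustration; it is not DeepMind’s actual code, only the general notion that the machine’s “state” is a rolling sequence of images rather than space itself.

    import numpy as np
    from collections import deque

    FRAME_SHAPE = (84, 84)   # illustrative frame size, common in Atari-style experiments
    STACK_SIZE = 4           # how many consecutive frames the learner sees at once

    def fake_game_frame(step: int) -> np.ndarray:
        """Stand-in for a real game engine: returns one grayscale frame per step."""
        rng = np.random.default_rng(step)
        return rng.random(FRAME_SHAPE, dtype=np.float32)

    frames = deque(maxlen=STACK_SIZE)
    for step in range(10):
        frames.append(fake_game_frame(step))
        if len(frames) == STACK_SIZE:
            # The "state" handed to a learning algorithm is not a map of the world,
            # only a stack of recent images: space as a sequence of depths/views.
            state = np.stack(frames, axis=0)   # shape: (4, 84, 84)
            # ...a policy or value network would consume `state` here...

The point of the sketch is only that any spatial structure the machine acquires has to be inferred from the succession of frames, which is the “sequence of depths” mentioned above.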

Would that generate a completely new form of space, as understood, of course, by the analog human? Human understanding of space is per se noumenal; it works through a series of associations given by memories, i.e. real events that taught us lessons we will never forget.

I am intrigued by what happens when the two processes of learning, the human analog and the machine digital, are interfaced.

Perhaps Ridley Scott’s Blade Runner and Luigi Pirandello’s “The Late Mattia Pascal” had a good intuition in pointing out the value of memories as a detector of humanity.