Mobile Cities

A few days ago the tech news world was dominated by Amazon’s acquisition of Whole Foods. Different sources speculated about the strategic value Whole Foods has for Amazon’s current business, as the Seattle company has already been experimenting with grocery from a delivery perspective (AmazonFresh) and a physical one (AmazonGo). Where does Whole Foods fit? Reading this article in Wired, I sense that the company is aiming at a wider spectrum that possibly joins the physical and the digital world by means of people, cities and infrastructure. The Wired article points out that AmazonFresh has a delivery problem: even though fresh vegetables are delivered to you in instants of (Amazon) time, there is a “pick up” issue. Customers are not always available for pick up, which leaves purchases ready for the bin. With AmazonGo the company made the digital web experience physical. By removing “human employees”, the store lets customers stroll around the shelves and leave with no queue; the app identifies individuals and their shopping lists, making payment touchless. The problem, however, is that customers actually need to go to the grocery store; the convenience of custom deliveries, which has been Amazon’s strength since the beginning, falls away.

Moby Mart is an automated grocery store that “satellites” around the city. Imagine a food truck, but with embedded AI, which can read customers’ needs on demand and keep food fresh until needed. Of course the project is only a prototype, but it is fascinating in terms of the conversation it opens on automated mobility, urban design and digital/physical infrastructure. Imagine that stillness becomes obsolete and shops, services, etc. are able to move around the city like Uber cars. A dynamic urban space, which erases centuries of urban planning and creates a different kind of place, whose parameters are no longer defined by residential/green/retail/industrial zoning but by the relationships people build around the physicality of a place.

I might be quite optimistic and, of course, I am concentrating only on the positive opportunities I can see emerging. However, the possibility of creating a city based on human networks, which become triggers of social, economic, leisure and political opportunities, sounds exciting.

The near future looks like the Plug-In City Archigram envisioned in the 1960s. The innovative perspective the dynamic Moby Mart scenario offers is the elimination of the shell – the physically confined space that creates the building/city – in favour of a mobile, fluid and dynamic space of interactions, like the one digital networks already achieve in digital space.


Architecture that builds values

The Economist’s 1843 Magazine article “Versailles in the Valley” frames quite well the current trend among digital corporations – like Facebook, Google and Apple – of building headquarters that represent their brand values. Versailles was the palace Louis XIV built to centralise his power through parties and events that entertained the Parisian aristocracy; the Versailles in the Valley symbolises a similar status. The Facebook, Google and Apple campuses are palaces that make the politics of the brand tangible. Whether sustainability, shareability, “open source”, etc., brand palaces look after the physicality of the image that makes those values real (it’s a kind of skeuomorphism). If in the past values were represented by statues carrying specific symbols (a snake, flames, mirrors), nowadays buildings are asked to play that role. The way a building is experienced from a human perspective, its materials and the human interactions it hosts are factors that represent the company. They are not random; they come from society. However, there is nothing new in this methodology; building monuments has been a political strategy that leaders of the past knew very well: in the fifth century BCE Pericles gave shape to democracy by building Athens, and Mussolini designed Fascism through the urban planning of Rome, which extended to the whole Italian peninsula. The Apple, Facebook and Google campuses (the word campus is already controversial in this specific context) are media that gather users’ imagination. They are tangible outcomes that shape digital, intangible interactions. As drivers of people’s imagination, they enable the transfer of something universal (as values are) to something specific to the company. Will community be understood as Facebook? There is also another effect: values can buy people’s trust if the message reaches its audience. If one of the values I believe in becomes the company’s, I trust the company, as we share the same values. As a consequence I trust what the company does without question, which is a risk to my critical thinking and ability to make choices.

I guess the challenge is to keep universal words universal, and to avoid any specific identification that might lead to an even more constrained world of thinking; to find our own solutions, our own credo, and the ability to articulate our thoughts independently. We need to handle our trust carefully. We design our lives through our choices. Our actions and decisions make a huge difference in society; being responsible for them is our own priority.

The Design of the City

Our urban environment has never been as fluid as it is now. From hailing-based businesses, migrations and shifts of urban identity, environmental issues, political boundaries and sovereignty claims to data and automation, just to name a few, the understanding of physical space needs to combine fluid and overlapping information, which does not stay still but adjusts its mutual impact according to local conditions. Urban space is a kind of entropic environment whose agents loop and, by looping, create many different territories, whose influence – lasting seconds or ages – affects the next iteration.

Within this landscape the question of urban design becomes a challenge. Which data should we include? Which parameters do we need to look at? Which behaviours should we analyse? There are many questions to be addressed, each with its own complexity, which makes any strategic planning of an urban environment a challenge. One of the most interesting debates on the topic took place in Quito in October 2016, where the UN Habitat conference started a new conversation. Looking back at CIAM and the design guidelines urbanists drew up in 1933, Ricky Burdett, Saskia Sassen and Richard Sennett’s conference presentations analysed the human value of those points and the impact they had on the city. The Quito Papers depict urban space as a territory that stages the life of its people. From Saskia Sassen’s “Who Owns the City” to Richard Sennett’s “Open/Porous City” and Ricky Burdett’s accent on the value of design in urban planning, the conversation places in the foreground the quality of urban space: the streets that people walk on, dwell in and occupy; the streets that build urban life. I agree: streets, and the life that passes through them and gets transformed, are one of the most interesting things to observe in order to understand patterns of human life.

The autonomous car will soon be the way we move. What does it mean for the street? Owning a car will possibly be replaced by hailing on demand. People will move for different reasons if services are provided by a digital infrastructure, which, in turn, provides a series of sub-infrastructures. The working environment might change too. People will commute for different reasons, at different times. The attention to sustainability, and to thinking of a city as a metabolic system whose energy can be transformed throughout its living organs, can make a huge impact on the people who live in the city, because the everyday will change. What would the daily routine be? How and where will people meet? How will the space of the city react, adapt and transform to different fluxes of people?

Designing cities according to fixed parameters, which foretell economic trends and therefore growth, no longer looks feasible. Google’s Peter Norvig describes AI “coding” as a work-in-progress methodology that needs to take into account dynamic patterns that adapt and learn from entropic and temporary conditions. At a bigger scale, with bigger problems, the city needs to take dynamic information into account and try to understand how its infrastructure might react and adjust to enable behaviour.

Behaviour is produced by the human factor; people make the city – the way they live, meet and work creates the territory for urban life. People are the central value of urban design. This Wall Street Journal article on the “hailization” of services in South East Asia demonstrates how trends adapt to culture. Any innovation confronts territorial resiliency. Any innovation needs to face local culture, i.e. the way people understand their lives, in order to create its own territory. By innovation I also mean strategic patterns that influence the way things flow. Indeed, the city should confront its own people and provide them with the temporary opportunities capable of relating the diversities that, together, design the next move.

Piazza 3.0

The brand new AmazonGo is a great metaphor for the state of our real, digital and physical world. The detail Amazon caught quite well is that, indeed, the physical and the digital look like parts of the same “whole”. When we describe our interactions with the digital we quite often draw a distinction from the physical. AmazonGo shows that this is not true; our interactions with technology tell a different story. Being in the digital is equal to being in the physical: social interactions, jobs, getting things done, etc. The Seattle-based company found, and combined well, the technological infrastructure to make this happen.

Amazon understands that humans are made of bones, and that they like stuff; stuff you can show, share, touch. Even when you shop online, you still like the thing itself. There is no VR that can generate the same satisfaction as buying a very cool brand new pair of trainers and showing them off to friends on Instagram or at the pub. The bond we have with stuff is ontological. I don’t believe there is any technology capable of replacing such a bond. Even though VR engages the body by simulating other senses – like smell and touch – our physical relationship with our stuff wins. Maurizia Boscagli’s book “Stuff Theory” frames this relationship quite well.

On the other hand, the possibility AmazonGo opens up relates to the way we interact with people and space. What can the retail world learn from this? Is it only about retail, or can it also extend to our homes, the places we work, the exhibitions we visit, etc.? What opportunity can our everyday space take from it?

The reason I used the word ontological to describe our relationship with stuff is that we associate a “human” value with the things we own. Once we take possession of our stuff, whether a home or shoes, we assign it a value. Value is not universal, and it’s not about the stock market. It is the literal human quality things have for us. It is related to the memories we associate with the object, the kind of experience the object represents to us. There is an embodied process of events encoded in the objects we own. I think it is not projected, as Walter Benjamin described in the Arcades Project. What does this mean for our everyday infrastructure? What does it mean for our experience of the physical/digital world? What can the AmazonGo model trigger and generate in terms of the physical experience we have with humans and things? What consequences come from using technology to smooth, and blur, our digital/physical interactions with humans and things? I believe these are questions to address in order to generate new forms of social opportunity. Where should “people” be? Is it about a special meal you want to cook for a special occasion? Is it about joining a talk about a new book?

The over-celebrated model of the Italian piazza began as a market. People met for a reason. There was an embodied system of exchange that called in other factors, which over time became what we know as the “piazza”. What is the piazza 3.0?

 

The Language of AI

In this article from the Harvard Business Review we are advised not to swear at any form of AI; since they are learning from us, it may cost us our careers. In other words, we should start treating AI with respect. Please do not use inappropriate language, and think of them as kitties.

With Tay, Microsoft had quite an experience in learning what happens if you let your AI follow Internet trends. Humans, we know, are not always nice. The Internet in particular gives us many examples of how human interaction is not always for the good of knowledge.

I would like to reflect on another point, though, which concerns improvements in AI speech. One of the best features Google Pixel offers is Google Assistant. Assistant learns from you and your interaction with the phone, hence from the world around you. By learning your behaviour, Assistant can anticipate your actions, join your conversations and interface with third parties like Uber. Google’s AI relies on an improved “understanding” of human-like thinking and language. As its human resolution gets better, you might end up establishing an empathic relationship with your AI and treating it as human.

Nonetheless, do we need to create different kinds of humans? What can they offer us, beyond mimicking our actions to the point that we believe them to be living entities? Chatbots are currently used to replicate our loved ones when they pass away, by learning their “language styles”. What is the ontological social role, and value, of AI? Do we want them to give us immortality? Do we want them to replicate us? Do they then need to develop human empathy? For what reason? I suppose one way to analyse the context is language. Language, indeed, is the first human vehicle, whether written or spoken, that helps establish relationships. We need a form of language to establish any connection with the other party. As AIs navigate the blurred threshold of the quasi-human, as we do, we can acknowledge their “being”, hence their social presence, by giving them a language. Such an action blurs the human-AI threshold and makes us, humans, look like machines. Is this what we want?

On the other hand, can machines have their own language, based on the skills and opportunities they can open up for us to live a better world? By changing the way they speak, I suppose, human perception and understanding of AI might take another route and open up different kinds of opportunities for human-machine collaboration.

It’s all about context

Semantic search seeks to improve search accuracy by understanding the searcher’s intent and the contextual meaning of terms as they appear in the searchable dataspace, whether on the Web or within a closed system, to generate more relevant results. Semantic search systems consider various points including context of search, location, intent, variation of words, synonyms, generalized and specialized queries, concept matching and natural language queries to provide relevant search results. Major web search engines like Google and Bing incorporate some elements of semantic search. (from Wikipedia)
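To make the contrast with literal keyword matching concrete, here is a minimal, invented sketch: the documents and the tiny three-dimensional “concept” vectors are made up for illustration, standing in for the high-dimensional embeddings a real semantic search system would learn from a model.

```python
# Toy contrast between keyword search and "semantic" (similarity-based) search.
# The documents and concept vectors (food, transport, technology) are invented
# for this example; real systems use learned embeddings, not hand-set values.
from math import sqrt

DOCS = {
    "Fresh vegetables delivered to your door": (0.9, 0.4, 0.2),
    "Autonomous cars and the future of streets": (0.0, 0.9, 0.7),
    "Grocery stores with no checkout queues": (0.8, 0.1, 0.6),
}

def cosine(a, b):
    """Cosine similarity between two vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def keyword_search(query, docs):
    """Return only the docs that literally share a word with the query."""
    words = set(query.lower().split())
    return [d for d in docs if words & set(d.lower().split())]

def semantic_search(query_vec, docs):
    """Rank all docs by closeness of meaning, not by shared words."""
    return sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)

# "buying food" shares no word with any title, so keyword search returns nothing...
print(keyword_search("buying food", DOCS))  # prints []
# ...but a query vector near the "food" concept still ranks the grocery docs first.
for doc in semantic_search((1.0, 0.0, 0.3), DOCS):
    print(doc)
```

The funnel metaphor discussed below maps onto the last line: the ranking narrows everything to one “best” answer, even though the grocery documents differ only slightly in score.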

For the common good we should get familiar with semantic search, as it will soon change the way we acquire knowledge and learn. This article in Forbes illustrates a very interesting perspective on how our interaction with recent AI-based technology is shifting our methodology of learning. The metaphor that I think works best, as explained in the article, is to consider current search engines as funnels. We search for something and get a narrowed list of answers. The more the search engine knows about us, the more accurate the answer. However, the human brain is great at pattern recognition; we are capable of linking different elements together – including our own background, our expectations, our biases – that affect what we are looking for. In the book Diffusion of Innovations, Everett M. Rogers defines information as a process through which we reduce uncertainty in an unfamiliar context. In other words, we search to get a better picture of the context; we are not naturally looking for a specific answer, but for an answer located in a context that makes sense to us.

Hence Google’s Knowledge Graph-based search, introduced in 2012, which looks at the user, the location, the language – i.e. the context in which the search happens.

However, when we speak to Siri, Google Now, Cortana, Alexa, etc., we no longer get a series of links but a single answer. We are no longer able to select what we want to read, or its source. According to the Forbes article, this system will reshuffle the information business.

Nonetheless, in this post I would like to focus more on education and research. Looking for different kinds of sources, connecting and disconnecting them, proving the “truth”: these are actions at the core of research and knowledge. If we have a single platform that provides answers, will we still be able to understand where information comes from? My provocation is the following: context and mapping.

My guess is that the way human beings will still find to be curious and investigative, to challenge assumptions and axioms (if answers were only assembled by algorithms, for Google we would probably still be living on a flat Earth), will be the skill of understanding and comparing different contexts and of mapping answers onto a bigger picture. Basically, flipping the way Google intends its search engine to work and using it for exactly the opposite. We may be able to funnel research the other way around.

 

When Design is the “Shell” of Technology

One of the projects exhibited at the Oslo Architecture Triennale – described in The Guardian – tells the story of Mark and the experience he provides through Airbnb. Mark’s homes stage everyday family living. You will find family pictures and everything that will satisfy your imaginary of renting a family home. Well, it’s all fake. Mark’s business hacked Airbnb’s keystone value: dwelling in the everyday of somebody’s home, with all the memories, artefacts and memorabilia that each of us collects through life.

Airbnb’s strategy, indeed, turns the human perception of intimacy into a business value (which Mark flipped into the core of his business). The more the host makes you feel at home, the more the accommodation provides the experience – and the good rating – you expect from visiting the place, whether you have ever been there or not. Intimacy is no longer a private sphere of our being, which takes shape through a series of objects we relate to. Intimacy is something you can sell. Your life goes on the market (and gets rated), as much as your image does with selfies.

Airbnb is not the only company “looking after” people’s interiors – with their collections of objects and memories; Amazon and Google are on the same page. Amazon’s Alexa is an artificial intelligence capable of sensing the environment. Alexa learns from you: your taste, what you read, the music you listen to, the places you visit, the friends you see… the list is quite long. Alexa absorbs your life so that it can “suggest” to Amazon what to suggest to you. Whereas on Airbnb your intimacy mimics the social masks you need to wear to perform the character your house is cast in (romantic, modern, family, etc.), Alexa moulds the character (you, indeed).

Similarly, Google is shifting its business approach by changing what made it very successful: the search engine. According to this article in the MIT Technology Review, Google is ready to introduce Assistant to the public. Assistant is a “third person” that reads you and the environment you are in (physical and digital) to make suggestions. The ambition is to turn Google search from a general page you type into, into a custom, interactive character that offers information, whether asked or not. Assistant can enter a conversation you are having with friends and make suggestions on the topics of discussion.

Alexa, Assistant and Airbnb make design the Shell (in Venturi, Scott Brown and Izenour’s sense) of technology, at different scales of course. What does design propose beyond decorating technology’s performance (both aesthetically and technically)? Is there any value design adds, besides embedding the sensors that connect you to the Internet? Interiors and products are interfaces, at different scales, that provide information. We interact with spaces and objects through algorithms that “learn” our behaviour to loop information back to the private company, and then to us. What we get is chewed, digested information. If interiors will probably be designed to best satisfy AI scanning (as shopping malls are currently designed to give shops maximum visibility) and objects to keep us “busy”, what can design do? Probably I need to define what I mean by design: the human passion for making and working with materials, thinking about mechanisms, solving problems, satisfying needs. Does design still perform a service to society?