The Design of the City

Our urban environment has never been as fluid as it is now. From hailing-based businesses, migration and shifting urban identities, environmental issues, political boundaries and sovereignty claims, to data and automation, to name just a few, our understanding of physical space needs to combine fluid, overlapping layers of information that do not stay still but adjust their mutual impact according to local conditions. Urban space is a kind of entropic environment through which agents loop and, by looping, create many different territories, whose influence may last seconds or ages, and which affect the next iteration.

Within this landscape the question of urban design becomes a challenge. Which data should we include? Which parameters do we need to look at? Which behaviours should we analyse? There are many questions to be addressed, each with its own complexity, which makes any strategic planning of an urban environment a challenge. One of the most interesting debates on the topic took place in Quito in October 2016, where the UN Habitat conference started a new conversation. Looking back at CIAM and the design guidelines urbanists drew up in 1933, Ricky Burdett, Saskia Sassen and Richard Sennett’s conference presentations analysed the human value of those points and the impact they had on the city. The Quito Papers depict urban space as a territory that stages the life of its people. From Saskia Sassen’s “Who Owns the City?” to Richard Sennett’s “Open/Porous City” and Ricky Burdett’s emphasis on the value of design in urban planning, the conversation places in the foreground the quality of urban space: the streets that people walk on, dwell in and occupy; the streets that build urban life. I agree: streets, and the life that passes through them and is transformed, are among the most interesting things to observe in order to understand patterns of human life.

The autonomous car will soon be the way we move. What does that mean for the street? Owning a car may well be replaced by hailing one on demand. People will move for different reasons if services are provided by a digital infrastructure, which, in turn, provides a series of sub-infrastructures. The working environment might change too: people will commute for different reasons, at different times. Attention to sustainability, and thinking of a city as a metabolic system whose energy can be transformed throughout its living organs, can have a huge impact on the people who live in the city, because the everyday will change. What will the daily routine be? How and where will people meet? How will the space of the city react, adapt and transform to different flows of people?

Designing cities according to fixed parameters that foretell economic trends, and therefore growth, no longer looks feasible. Google’s Peter Norvig describes AI “coding” as a work-in-progress methodology that needs to take into account dynamic patterns that adapt to and learn from entropic, temporary conditions. At a bigger scale, with bigger problems, the city needs to take dynamic information into account and try to understand how its infrastructure might react and adjust to enable behaviour.
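As a minimal sketch of what adapting to dynamic rather than fixed parameters might mean in code (the class name and the scenario are my own illustrative assumptions, not Norvig’s actual work):

```python
# Hypothetical sketch: an estimate that keeps adapting to a drifting signal,
# rather than a fixed parameter planned once and for all.

class AdaptiveEstimator:
    """Tracks a quantity that changes over time (e.g. traffic on a street)."""

    def __init__(self, learning_rate=0.2):
        self.learning_rate = learning_rate
        self.estimate = 0.0

    def update(self, observation):
        # Move the estimate toward each new observation:
        # recent conditions matter more than the original plan.
        self.estimate += self.learning_rate * (observation - self.estimate)
        return self.estimate

est = AdaptiveEstimator()
for flow in [100, 102, 98, 250, 260, 255]:  # a sudden shift in conditions
    est.update(flow)
print(est.estimate)  # the estimate has drifted toward the new regime
```

A fixed-parameter model would keep predicting the old regime; the point here is simply that the estimate itself is revised by every new observation.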

Behaviour is produced by the human factor; people make the city: the way they live, meet and work creates the territory for urban life. People are the central value of urban design. This Wall Street Journal article on the rise of hailing services in South East Asia demonstrates how trends adapt to culture. Any innovation confronts territorial resiliency. Any innovation needs to face local culture, i.e. the way people understand their life, to create its own territory. By innovation I also mean strategic patterns that influence the way things flow. Indeed, the city should confront its own people, and provide them with the temporary opportunities capable of relating the diversities that, together, design the next move.

The Language of AI

This article from the Harvard Business Review recommends that we not swear at any form of AI: since they are learning from us, it may cost us our career. In other words, we should start treating AI with respect. Please do not use inappropriate language, and think of them as kitties.

With Tay, Microsoft had quite an experience in learning what happens if you let your AI follow Internet trends. Humans, we know, are not always nice. The Internet, in particular, gives us many examples of how human interaction is not always for the good of knowledge.

I would like to reflect on another point, though, which concerns improvements in AI speech. One of the best features the Google Pixel offers is Google Assistant. Assistant learns from you and your interaction with the phone, and hence with the world around you. By learning your behaviour, Assistant can anticipate your actions, join your conversations and interface with third parties like Uber. Google’s AI relies on an improved “understanding” of human-like thinking and language. As its human resolution gets better, you might end up establishing an empathic relationship with your AI and treating it as human.
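To make “anticipating your actions” concrete, here is one naive way an assistant could do it, as a toy sketch under my own assumptions (this is not Google Assistant’s actual method): count which action usually follows the current one, and suggest the most frequent successor.

```python
# Illustrative sketch only: anticipating a user's next action by
# counting which action most often follows the current one.
from collections import Counter, defaultdict

class NextActionModel:
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, history):
        # Record each consecutive pair of actions the user performed.
        for current, following in zip(history, history[1:]):
            self.transitions[current][following] += 1

    def anticipate(self, current):
        # Suggest the action most often seen after the current one.
        options = self.transitions[current]
        return options.most_common(1)[0][0] if options else None

model = NextActionModel()
model.observe(["alarm_off", "weather", "traffic",
               "alarm_off", "weather", "news",
               "alarm_off", "weather", "traffic"])
print(model.anticipate("alarm_off"))  # -> weather
```

Even this crude frequency count already “learns your behaviour”; a real assistant layers far richer context (time, location, conversation) on top of the same idea.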

Nonetheless, do we need to create different kinds of humans? What can they offer us, beyond mimicking our actions to the point that we believe them to be living entities? Chatbots are currently used to replicate our loved ones when they pass away, by learning their “language styles”. What is the ontological social role, and value, of AI? Do we want them to give us immortality? Do we want them to replicate us? Do they then need to develop human empathy? For what reason? I suppose one way to analyse the context is language. Language, indeed, whether written or spoken, is the first human vehicle that helps establish relationships. We need a form of language to establish any form of connection with the other party. As AIs navigate the blurred threshold of the quasi-human, as we do, we can acknowledge their “being”, hence their social presence, by giving them a language. Such an action blurs the human-AI threshold and makes us, humans, look like machines. Is this what we want?

On the other hand, can machines have their own language, based on the skills and opportunities they can open up for us to live in a better world? By changing the way they speak, I suppose human perception and understanding of AI might take another route and open up different kinds of opportunities for human-machine collaboration.

It’s all about context

Semantic search seeks to improve search accuracy by understanding the searcher’s intent and the contextual meaning of terms as they appear in the searchable dataspace, whether on the Web or within a closed system, to generate more relevant results. Semantic search systems consider various points including context of search, location, intent, variation of words, synonyms, generalized and specialized queries, concept matching and natural language queries to provide relevant search results. Major web search engines like Google and Bing incorporate some elements of semantic search. (from Wikipedia)

For the common good we should get familiar with semantic search, as it will soon change the way we acquire knowledge and learn. This article in Forbes illustrates a very interesting perspective on how our interaction with recent AI-based technology is shifting our methodology of learning. The metaphor that I think works best, as explained in the article, is to consider current search engines as funnels. We search for something and get a narrowed list of answers. The more a search engine knows about us, the more accurate the answer. The human brain, however, is great at pattern recognition; we are capable of linking different elements together – including our own background, our expectations, our biases – and these affect what we are looking for. In the book Diffusion of Innovations, Everett M. Rogers defines information as a process through which we reduce uncertainty when in an unfamiliar context. In other words, we search to get a better picture of the context; we are not naturally looking for a specific answer, but for an answer located in a context that makes sense to us.

Hence Google’s knowledge graph-based search. It is a search process, introduced in 2012, that looks at the user, the location, the language – i.e. the context in which the search happens.
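The idea of folding context into the query can be sketched very simply. This is a toy of my own devising, not Google’s algorithm: the query is scored together with contextual signals (location, language, history), so the “funnel” narrows toward results that fit the situation, not just the literal words.

```python
# Toy sketch of context-aware ranking: the query and the user's context
# are combined into one bag-of-words vector, then documents are ranked
# by cosine similarity to it.
import math
from collections import Counter

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def contextual_search(query, context, documents):
    # Fold the context into the query so the ambiguous word "jaguar"
    # resolves differently for different users.
    needle = vectorize(query + " " + context)
    return max(documents, key=lambda d: cosine(needle, vectorize(d)))

docs = ["jaguar speed on the savannah",
        "jaguar car dealership london prices"]
print(contextual_search("jaguar", "car london", docs))
# -> jaguar car dealership london prices
```

The same one-word query returns the other document if the context is, say, “animal savannah” – which is exactly the single-answer behaviour discussed below: the system decides for you which context applies.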

However, when we speak to Siri, Google Now, Cortana, Alexa, etc., we no longer get a series of links, but a single answer. We are no longer allowed to select what we want to read, or its source. According to the Forbes article, this system will reshuffle the information business.

Nonetheless, in this post I would like to focus more on education and research. Looking for different kinds of sources, connecting and disconnecting them, and proving the “truth” are actions at the core of research and knowledge. If we have a single platform that provides answers, will we still be able to understand where information comes from? My provocation is the following: context and mapping.

My guess about how human beings will still find ways to be curious and investigative, and to challenge assumptions and axioms (for Google, we would probably still be living on a flat Earth if the answer were provided by algorithms that merely assemble information), is that the key skill will be understanding and comparing different contexts, and mapping answers into a bigger picture. Basically, flipping the way Google intends to map its search engine and using it for exactly the opposite. We may be able to funnel research the other way around.

 

What is real Reality? AI might tell us

Last year I went to the National Gallery in London to see the Francisco Goya exhibition. I enjoyed the painter’s mastery in giving human character to his portraits by balancing the relationship between the background, often a solid colour, and the subject. Goya doesn’t draw a border between the two; he blurs the boundary so that the subject emerges from the background. Such a simple operation gives a sense of the character’s personality; the balance of colours – and chiaroscuro – that gradually progresses from the background to the face and the body returns to me (the interpreter) the experience of the subject’s personality.

Current research on AI is moving towards giving machines a sense of space, by teaching them what space is (as we understand it). Through deep learning, machines are developing a sense of reality: a form of knowledge capable of understanding objects in real space by means of image pixelation. This article from MIT Technology Review describes how machines can detect physical objects via digital images; pixelation is the language they employ. The differential between the background and the given object is the focus of attention. In contrast to the poetic skill Goya used to give a sense of the subject’s personality, the understanding of the “border” is the key element used to teach machines space. The intention is to teach machines to “see” – and, I suppose, think – like us. Digital image pixelation is the vehicle machines use to understand the real as we do.
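At its most reductive, the “differential between the background and the given object” is literally a jump in pixel values. The following is a deliberately minimal sketch of that idea (my own illustration, not the method in the MIT Technology Review article): scan one row of a grayscale image and mark where neighbouring intensities change sharply – the opposite of Goya’s blurred boundary.

```python
# Minimal sketch: finding an object's "border" as the places where
# pixel intensity jumps between background and subject.

def find_edges(row, threshold=50):
    """Return indices where neighbouring pixel intensities jump."""
    return [i for i in range(1, len(row))
            if abs(row[i] - row[i - 1]) > threshold]

# One scanline of a grayscale image:
# dark background, bright object, dark background again.
scanline = [10, 12, 11, 200, 205, 210, 198, 15, 12]
print(find_edges(scanline))  # -> [3, 7]
```

Real detectors work on two-dimensional gradients and learned features rather than a hand-set threshold, but the founding intuition is the same: the border is where the numbers change.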

What is the real?

Quoting Slavoj Žižek: “Every field of ‘reality’ is always-already enframed, seen through an invisible frame. The parallax is not symmetrical, composed of two incompatible perspectives of the same X: there is an irreducible asymmetry between the two perspectives, a minimal reflexive twist. We do not have two perspectives, we have a perspective and what eludes it, and the other perspective fills in this void of what we could not see from the first perspective.”

In other words, we humans don’t see things as our eyes do. There is a gap in between that constructs our sense of the real. As Žižek says, there is a void in between that we fill with our imagination. Imagination is a form of expectation of the real, linked to our past experience, which our mind has stored in the form of memory.

How can such a random and complex fluctuation be translated to a machine? What we call “real” is nonetheless a specific frame of our perception, which makes no distinction between digital and physical, as everything gets stored in our mind in the form of experience. It kind of makes me think back to Ridley Scott’s Blade Runner, in which machines desperately need pictures to be acknowledged as humans.

 

Žižek, Slavoj (2006), The Parallax View, Cambridge MA: MIT Press

The Middle Age of Digital Technologies

In history we mark the passage between the Middle Ages and the Modern Age at the moment an Italian explorer landed on what he thought was the east coast of Asia. 12 October 1492 is the official beginning of modernity, which also marks the time when planet Earth finally regained its geometrical form: from flat to spherical.

Benjamin Bratton makes an interesting point in an article published in the New York Times, where he analyses the commonplaces around artificial intelligence (AI) as humanoids that resemble us in every aspect: to be judged as intelligent, AI needs to reflect human intelligence. According to Bratton, this approach looks similar to the human-centric astronomy of the Middle Ages, which placed us at the centre of the universe. Well, we found out that we are not: our solar system sits in one of millions of galaxies, and we are an infinitesimal part of an infinite space made of millions of other galaxies.

Perhaps we are not so unique and special that we can expect everything else to copy and envy us, as Ridley Scott’s Blade Runner portrays. Possibly the Anthropocene’s shift towards a technological nihilism might channel a different approach to technology and let us discover something new about ourselves. Following an analogous path, John Brockman asks on Edge.org what AI thinks. It would be interesting to give shape and sound to the voice of machines, rather than making them speak as we do.

The “Nature” experiment looks to be just the beginning.