It’s all about context

Semantic search seeks to improve search accuracy by understanding the searcher’s intent and the contextual meaning of terms as they appear in the searchable dataspace, whether on the Web or within a closed system, to generate more relevant results. Semantic search systems consider various points including context of search, location, intent, variation of words, synonyms, generalized and specialized queries, concept matching and natural language queries to provide relevant search results. Major web search engines like Google and Bing incorporate some elements of semantic search. (from Wikipedia)
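The "concept matching" mentioned above is the idea that a search engine can rank results by meaning rather than by shared keywords. A minimal sketch of that idea, using entirely made-up toy vectors (real systems learn high-dimensional embeddings from data; the words, vectors, and axes here are illustrative assumptions):

```python
from math import sqrt

# Toy 3-dimensional "embeddings" (made up for illustration):
# each axis loosely stands for a theme (travel, food, tech).
vectors = {
    "hotel":         [0.9, 0.1, 0.0],
    "accommodation": [0.8, 0.2, 0.0],
    "restaurant":    [0.3, 0.9, 0.1],
    "laptop":        [0.0, 0.1, 0.9],
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction (same "meaning").
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

query = vectors["hotel"]
ranked = sorted(
    (w for w in vectors if w != "hotel"),
    key=lambda w: cosine(query, vectors[w]),
    reverse=True,
)
print(ranked[0])  # "accommodation": shares no letters with "hotel", only meaning
```

A keyword engine would see no overlap between "hotel" and "accommodation"; a semantic one, as sketched here, ranks them closest because their vectors point the same way.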

For the common good we should get familiar with semantic search, as it will soon change the way we acquire knowledge and learn. An article in Forbes illustrates a very interesting perspective on how our interaction with recent AI-based technology is shifting our methodology of learning. The metaphor that I think works best, as explained in the article, is to consider current search engines as funnels. We search for something and get a narrowed list of answers. The more the search engine knows about us, the more accurate the answer. However, the human brain is great at pattern recognition; we are capable of linking different elements together, including our own background, our expectations, our biases, all of which affect what we are looking for. In his book Diffusion of Innovations, Everett M. Rogers defines information as a process through which we reduce uncertainty in an unfamiliar context. In other words, we search to get a better picture of the context; we are not naturally looking for a specific answer, but for an answer located in a context that makes sense to us.

Hence Google's Knowledge Graph-based search. Introduced in 2012, it is a search process that looks at the user, the location, the language, i.e. the context in which the search happens.

However, when we speak to Siri, Google Now, Cortana, Alexa, etc., we no longer get a series of links, but a single answer. We are no longer able to choose what we read, or its source. According to the Forbes article, this system will reshuffle the information business.

Nonetheless, in this post I would like to focus more on education and research. Looking for different kinds of sources, connecting and disconnecting them, proving the "truth": these actions are at the core of research and knowledge. If we have a single platform that provides answers, will we still be able to understand where information comes from? My provocation to this is the following: context and mapping.

My guess is that human beings will still find ways to be curious and investigative, to challenge assumptions and axioms (if answers were provided by algorithms that merely assemble existing information, Google would probably still tell us we live on a flat Earth), through the skills of understanding and comparing different contexts and mapping answers onto a bigger picture. Basically, flipping the way Google intends its search engine to work and using it for exactly the opposite. We may be able to funnel research the other way around.


The Architecture of the City: Content Maps, Data, Space and Design

Last May I gave a talk at the Scene Gallery in London, which I called "The Elegy of Public Space". The talk looked at spatial effects in physical space as drawn by the language of "Content Maps". I call "Content Maps" those GPS maps that display the city under specific themes: Uber with its drivers, Airbnb with its available places, Foursquare and Yelp with leisure, or Zoopla and Rightmove (among many) with house hunting. Under "Content Maps" the city is a collection of themes whose adjacency constitutes what we once called a city. "Content Maps" flatten the complexity and intricacy of urban space (with its pedestrians, squares, benches, lights, green areas, etc.), rendering the city as clusters of cloud information.

Where is urban design? Well, design becomes the allocation of new private space to be managed according to a specific theme. Once that is established, streets, bus stops, facilities, and so on come along.

The peak of this trend will be reached once Google or Apple puts driverless cars on the streets, possibly introducing a new infrastructural revolution in the way we (pedestrian users) experience urban space.

In this post Dan Hill argues about the lack of design in contemporary cities. Cities are data clouds that network companies manage for third parties. My last slide at the Scene Gallery presented the London Garden Bridge as the effect of current urban politics, where the general public assumes that physical space is as private as digital space: a kind of big Facebook piazza owned by private companies. To some extent we are already going there.

The lack of architecture in the space of the city is the result of different interwoven factors. In my view there is a general lack of understanding of data. Data, beyond their use for scaling things up and down (utilities, squares, infrastructure) and beyond infographic representations of phenomena, have a valuable urban design role. The ability to understand real-time behaviour is an element that can be integrated into the analysis and design of the urban fabric, where by urban fabric I mean the space that citizens dwell in every day. I do agree that the kernel is not the building but the network, which constitutes the contemporary urban tectonic of exchange points. In other words, buildings are terminals, or interfaces (if I may borrow the words), that enact urban behaviour.

When thinking about the city, scale is the first element that should come to mind. What we need is not the scale of the screen, i.e. apps that can understand the territory, but architecture that displays urban life.