    Introduction
    by Metaman @OPEN_INTEL

    My last article, ‘When two worlds collide’, identified the two principal characteristics of the emerging forms of knowledge – as flow rather than documents, and as user-generated and user-‘curated’ content rather than professionally generated and ‘managed’ content.

    The core assumption behind the design of software tools to help people to cope with these new forms of knowledge is that algorithms, typified by Google’s ‘relevance ranking’, are the best (indeed the only) means to deal with the unimaginable quantities of information available. Without these sorts of powerful algorithms and the immense processing power of vast server farms, people would drown in the torrent of information, or be lost forever in a huge and uncharted sea of data.

    Unlike pure Boolean searching where people can understand how their queries work, the assumptions, logic and mathematics behind algorithmic solutions are completely unknown to users. They must have blind faith that the blackest of black boxes really is returning the results most pertinent to their query, because that’s what the software providers say. Of course, users can never really know because they have no idea of what knowledge is out there in the first place. They can have no idea of what the algorithm might have missed. This is not surprising since the algorithm cannot know what the user really wanted.
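    That transparency is easy to see in miniature. Here is a sketch of a pure Boolean query in Python – the documents and terms are invented for illustration – where a user can trace every step of the result by hand:

```python
# A minimal sketch of transparent Boolean retrieval.
# The "documents" are invented for illustration.
docs = {
    1: "knowledge management and taxonomies for the enterprise",
    2: "relevance ranking algorithms at web scale",
    3: "taxonomies and relevance in knowledge flows",
}

def matching(term):
    """Return the set of document ids whose text contains the term."""
    return {doc_id for doc_id, text in docs.items() if term in text.split()}

# Query: taxonomies AND knowledge, NOT enterprise
result = (matching("taxonomies") & matching("knowledge")) - matching("enterprise")
print(sorted(result))  # → [3]
```

    Nothing is hidden: each operator maps onto an explicit set operation, which is exactly what a relevance-ranking black box does not offer.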

    In addition to this radical uncertainty, software that calibrates relevance on the basis of aggregating and analysing users’ search behaviour carries the added risk of the tyranny of the crowd, as popularity becomes the arbiter of relevance. Nevertheless, the overwhelming bulk of research and development funding is going into the algorithmic computing culture.

    Even the US Air Force is in on the act. According to April’s Wired magazine: “The Defense Department is continuing its push to reduce human thought and human action to a few lines of code.” The Air Force Office of Scientific Research1 is reported as, “looking to build ‘mathematical or computational models of human attention, memory, categorisation, reasoning, problem solving, learning and motivation, and decision making’.”

    The social and intellectual consequences of this algorithmic approach have been enormous. Since the mysterious algorithms are supposed to do all the work, people have not had to learn the basic skills necessary for effective knowledge management (KM). Indeed, the costs of this profound ignorance must be even greater than all the money spent on the years of chasing the chimera of the intelligent machine.

    Although most of the money has been spent on the development of algorithmic intelligence, the Web 2.0 phenomenon has produced a spontaneous flowering of the use of human intelligence to help make sense of the knowledge on the web. This includes tagging, folksonomies, systems enabling guidance by following others and their recommendations, or systems that measure the value and popularity of both knowledge items and knowledge flows.
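    How these human-intelligence systems work can be sketched very simply. Assuming nothing more than a list of (user, tag) pairs – all invented here for illustration – a folksonomy is just a running tally of the crowd’s judgements:

```python
from collections import Counter

# Sketch of a folksonomy: many users tag the same item, and the
# aggregate of their free-form tags becomes its emergent description.
# Users and tags are invented for illustration.
taggings = [
    ("alice", "km"), ("alice", "taxonomy"),
    ("bob", "km"), ("bob", "search"),
    ("carol", "km"), ("carol", "taxonomy"),
]

tag_counts = Counter(tag for _user, tag in taggings)
# The most common tags represent the crowd's judgement of relevance
print(tag_counts.most_common(2))  # → [('km', 3), ('taxonomy', 2)]
```

    No algorithm infers meaning here; the ranking simply aggregates human choices.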

    What all these have in common is that they rely on people’s judgements about relevance, not algorithms. Another thing they have in common is that they were created by amateurs with little knowledge of the techniques of KM. The final thing these systems have in common is that they do not exploit the power of structure to open vistas on knowledge flows.

    What if basic KM skills, tools and taxonomies were to be learned by large numbers of social media users? Wildcat, the original ‘knowmad’, puts it like this:

    “Knowmads are developing a particular set of skills that will in time become the normative expression of extra ordinary cognition, I refer specifically to minds that are developing an exceptional capability of enhancing the health and wealth of an infocology by their choices.

    These choices of what to publish and what to point, what to prune and what to emphasize are the key parameters of these intersectionists (hyperconnected individuals locating themselves at the intersection of feeds of information in advanced infocologies)2.”

    This process is much more like music and dance than an algorithm. Music and dance would be impossible without the structures of rhythms and harmonics and agreed rules for playing them. Like music and dance, the knowledge flows of infocologies unfold through time, rather than being stuck and forgotten in past time.

    Also just like music and dance, the play can only begin on the basis of some agreed conventions and structures. Think of the difference between dancing the Waltz and a Charleston.

    Meaning, whether in music, dance or knowledge flows, comes from imposing a structure on movement of the body or of information flows through time. The game is about channelling, monitoring and inferring significance from the flows. But before the play can start, there need to be some agreed rules and taxonomies. This is the power of structure. Are people ready to embrace this knowledge en masse? Perhaps not, although it must be said that ideas whose time has come do spread like wildfire over the web.

    Then knowledge managers would morph into knowmads channelling and adding value to the information. And users would become knowmads too by becoming proficient knowledge managers. And everyone would live happily ever after – except the relevance algorithms which weren’t alive in the first place.

    Jan Wyllie is the founder of Open Intelligence. He is also the author of one of Ark’s most successful reports, Taxonomies: Frameworks for Corporate Knowledge. He can be contacted at


    Synapses (1)
    Today, the frontier of human productive capacity is the power of extended collaboration – the ability to work together beyond the scope of small groups using the new tools of collaboration. ‘The Moment Social Media Became Serious Business’, Harvard Business Review, Jan 19, 2010

    by Metaman @OPEN_INTEL

    Creative confusion: A world in flux
    Even after all these years I suspect that, like myself, many people still have lots of questions to ask about what Web 2.0, Enterprise 2.0 and, of course, KM 2.0 are – and what the 2.0 means. No doubt the terms are also afflicted by the universal curse faced by knowledge managers, which is that they are liable to mean different things to different people.

    Fortunately, doubt and uncertainty are essential parts of the new world of knowledge, which is emerging through the combination of new virtual connection technologies and new, less ideological ways of thinking.

    There are two characteristics of KM 2.0 and social media, the profiles of which are pretty clear:

    1) Knowledge is no longer conceived as locked away in documents with the only key being retrospective retrieval. It comes from a non-stop multimedia flow, where the important nuggets need separating from the waste in real time. If you have to search for it, then it is liable to be already too late; and

    2) Professional knowledge canons are complemented (or is it swamped?) by user generated content and knowledge flows.
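    The flow view in point 1 can be sketched as a filter applied to items as they arrive, rather than a retrospective search over a store. The feed and watched keywords below are invented for illustration:

```python
# Sketch of knowledge-as-flow: filter items in real time as they
# arrive, instead of retrieving them retrospectively.
def nuggets(stream, keywords):
    """Yield items from the stream that match any watched keyword."""
    for item in stream:
        if any(k in item for k in keywords):
            yield item

feed = ["km 2.0 webinar", "cat video", "faceted taxonomy primer"]
print(list(nuggets(feed, {"km", "taxonomy"})))
# → ['km 2.0 webinar', 'faceted taxonomy primer']
```

    The point of the sketch is the shape, not the code: the nuggets are separated from the waste as the flow passes, with nothing to search for afterwards.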

    Collective technologies: Towards collaborative intelligence
    There is also no doubt that a new generation of online software has been crucial in what is widely perceived as a step change in the way people relate to each other.

    The first wave (2.0) was software designed to help people to manage and co-author documents, share files and organise virtual meetings. The second wave (about 2.3) enables users to create their own knowledge flows from both materials of their own making (blogs) and of bookmarked items (with or without comments). Twitter is a tool enabling both functions.

    Unfortunately, the user-centred nature of 2.0 technologies has led inevitably to a massive waste of resources, as meaning and pertinence are drowned out by the noise and sabotaged by amateurish mistakes. It’s a bit like the first days of desktop publishing (DTP) in information management terms. What happened then was that the mass of ‘DTPers’ learned the basics from professionals and standards improved markedly.

    The same could happen in KM 2.0 and social media. People are already talking about ‘professional citizen journalists’. Daniel Durrant writes on Amplify: “An uncomfortable rift may arise between professionally defined ‘Journalists’ and noise making users unless we create systems that work for the both camps. A little bit of noise is fine, but music is better. Journalists and citizens are capable of working together to enhance the quality of online media.”

    The same could be said for the rift between knowledge professionals and knowledge users, who seem to steadfastly refuse to take on board even the most basic KM practices. Just for starters, I would bet that over 95 per cent of UK online knowledge users would not know how to formulate a Boolean search, let alone begin to comprehend faceted taxonomies.
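    To make ‘faceted taxonomies’ concrete, here is a minimal sketch – the facets, values and documents are invented for illustration – of how independent facets combine freely at query time:

```python
# Sketch of a faceted taxonomy: each document is classified along
# several independent facets (dimensions), and any combination of
# facet values can be requested at query time.
documents = [
    {"id": 1, "topic": "search",   "format": "article", "audience": "practitioner"},
    {"id": 2, "topic": "search",   "format": "report",  "audience": "manager"},
    {"id": 3, "topic": "taxonomy", "format": "article", "audience": "practitioner"},
]

def facet_filter(docs, **facets):
    """Keep documents matching every requested facet value."""
    return [d for d in docs if all(d.get(k) == v for k, v in facets.items())]

hits = facet_filter(documents, topic="search", audience="practitioner")
print([d["id"] for d in hits])  # → [1]
```

    Unlike a single rigid hierarchy, no facet depends on any other, which is what gives faceted classification its combinatorial reach.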

    Here, I must confess that after years working as a journalist and a content analyst, I had never heard of Boolean algebra until I met Tony Kent about 30 years ago. Using his text database software he quietly ‘blew my mind’ with the amazing power of Boolean search strategies, both on full text and structured metadata. In the years that we collaborated, one job I had was teaching secretarial staff to use , part of which was blowing their minds with basic Boolean operators used to build up sets which could then be re-combined using Boolean operators. In most cases, intelligent people, such as secretaries, can learn to use Boolean algebra quite effectively within about an hour or two. One of the great things about working this way is that it really makes people *think* about what they are looking for.
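    The set-building workflow described above can be sketched with Python’s set operators. The numbered sets and document ids are invented for illustration, in the style of classic online search systems where each query step produces a saved set that later steps recombine:

```python
# Sketch of set-based search: each step yields a numbered set that
# later steps can recombine with Boolean operators.
sets = {}
sets["S1"] = {1, 2, 5, 8}              # documents containing term A
sets["S2"] = {2, 3, 5, 9}              # documents containing term B
sets["S3"] = sets["S1"] & sets["S2"]   # S1 AND S2 → {2, 5}
sets["S4"] = sets["S3"] | {9}          # S3 OR a further set
sets["S5"] = sets["S4"] - {2}          # S4 NOT an exclusion set
print(sorted(sets["S5"]))  # → [5, 9]
```

    Each intermediate set stays visible and nameable, which is precisely what forces the searcher to think about what they are looking for.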

    Tony Kent used to be very dismissive of relevance ranking algorithms compared with the precision and recall of an intelligent Boolean query. How much knowledge have people lost by casting their search fate to the wind of Google relevance algorithms, for the sake of an hour’s teaching?
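    Precision and recall, the two measures behind Kent’s comparison, are easy to state. The retrieved and relevant sets below are invented for illustration:

```python
# Precision: what fraction of retrieved documents were relevant.
# Recall: what fraction of relevant documents were retrieved.
retrieved = {1, 2, 3, 4}   # what the search returned
relevant = {2, 3, 5, 6}    # what the user actually wanted

found = retrieved & relevant            # {2, 3}
precision = len(found) / len(retrieved)  # 2/4 = 0.5
recall = len(found) / len(relevant)      # 2/4 = 0.5
print(precision, recall)
```

    A well-formed Boolean query lets the searcher trade these off deliberately; a ranking algorithm makes the trade invisibly on the user’s behalf.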

    The next article, Part 2 (Algorithmic culture: The power of structure; Spirit of enquiry: Open questioning) will be about ways to bridge the chasm between users and professionals. As with any relationship, both parties will need to learn from each other.

    Jan Wyllie is a founding director of Open Intelligence. He can be contacted at
    Mon, Sep 27, 2010