    video documentation of the final/compromised stage of the project.
    toggle the slider for a better idea of how the program really works. to keep loading times down, the video's frame rate has been cut to 15 fps, so the breaking apart of the video pixels is inadvertently compressed into a split second.
    Tue, Mar 18, 2008
    Sent to project: Emergence and Navigating Space

    part I: process
    in the first part of the process, much of our effort was dedicated to developing an object in space that the user could navigate around. however, the 'object' in that space went through several transformations as we slowly adjusted it to our limited capabilities. initially it was a spherical object with limitless particle emissions. lacking control over the complexities that arose, we turned to using a video feed as the object, with each individual pixel of the video acting as a particle emitter. again, full control over the complexities of that was beyond our reach, and so a compromise was made.
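    roughly, the video-as-emitter idea looks something like the minimal processing sketch below. this is not our actual code; the capture resolution, the downsampling step, and the z jitter are just placeholders for illustration.

    import processing.video.*;

    Capture cam;

    void setup() {
      size(640, 480, P3D);
      // low-resolution feed so that every pixel can plausibly act as an emitter
      cam = new Capture(this, 160, 120);
      cam.start();   // needed in Processing 2+; capture starts automatically in 1.x
    }

    void draw() {
      background(0);
      if (cam.available()) cam.read();
      cam.loadPixels();
      // treat each video pixel as a tiny "emitter": a point jittered slightly in z
      for (int y = 0; y < cam.height; y += 2) {
        for (int x = 0; x < cam.width; x += 2) {
          color c = cam.pixels[y * cam.width + x];
          stroke(c);
          point(x * 4, y * 4, random(-2, 2));   // stand-in for real particle emission
        }
      }
    }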
    screenshots from the dummy/working version of the program:



    part II: compromise
    one of the major hurdles in pursuing our ambitions for the initial concept was perhaps our weak grasp of the tools at our disposal. so to compensate for not meeting our initial ideas, we made several, if not a whole lot of, tweaks to the predecessors of this final version. one of the major changes in this process was a decreased emphasis on the emitter and the particle effects, which had severely taxed the computers we worked with (especially in the dummy versions with the points). the emphasis shifted instead to pushing the visuals rather than the potential of the program, and a few workaround solutions were developed (see the sketch after this list):
    1. create an emitter that isn't constantly updating.
    2. make the emitter emit the pixels that reflect the video feed.
    3. thoughts were given to making the video pixels emit from the emitter at start, but we struggled to gain control over the individual video pixels when that happens.
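    a minimal sketch of what workarounds 1 and 2 amount to (not the actual program; the names and values are placeholders): the video pixels are copied into a fixed array once, so the emitter stops updating and simply re-emits the stored colors every frame.

    import processing.video.*;

    Capture cam;
    color[] frozen;           // pixels grabbed once, then re-emitted every frame
    boolean captured = false;

    void setup() {
      size(640, 480, P3D);
      cam = new Capture(this, 160, 120);
      cam.start();
    }

    void draw() {
      background(0);
      if (!captured && cam.available()) {
        cam.read();
        cam.loadPixels();
        frozen = cam.pixels.clone();   // workaround 1: stop updating the emitter after one grab
        captured = true;
      }
      if (!captured) return;
      // workaround 2: the emitter re-emits the stored video pixels instead of live particles
      noStroke();
      for (int y = 0; y < 120; y += 2) {
        for (int x = 0; x < 160; x += 2) {
          fill(frozen[y * 160 + x]);
          rect(x * 4, y * 4, 4, 4);
        }
      }
    }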
    in terms of the project prompt of navigating space, our program doesn't really allow for much 'navigation' in the traditional sense of the word. rather than creating a space in which the user could navigate, we ended up creating a tool the user can use to navigate through the elements of a given space (which in this case are the individual colors that make up that space). so our program is best summarized as a kind of tool for a kind of navigation within a given space, as opposed to a virtual tour through space.

    below are some of the screenshots which describe the program better than the terror that was the past presentation.

    a few notes on the images:
    +initial camera feed - defines the 2d space. everything happens in real-time until the RMB is triggered and the pixels are broken apart.
    +RMB initiates a break-apart effect which separates the video into layers at different z depths. the z position is determined by the brightness and the color value, so the darker the color, the further back it sits (a rough sketch of this mapping follows these notes).
    +LMB retracts the individual pixels and pulls the camera to the opposite side for a different view, as seen in the last image.
    +the camera is constrained to zooming in and out and panning up and down within a limited range. the limitations were set to compensate for our poor grasp of the camera controls. the final image shows the camera fixed at the opposite end of the camera feed.
    +the navigation is mostly through the colors that make up a space.
    ++first image is only a thumbnail.
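    a compressed sketch of the break-apart behaviour described in the notes, using placeholder data instead of a live camera grab; the depth range and easing speed are made-up values. brightness drives the z offset, the right mouse button breaks the plane apart, and the left button retracts it.

    color[] frozen = new color[160 * 120];
    float amount = 0;          // 0 = flat video plane, 1 = fully broken apart
    boolean exploded = false;

    void setup() {
      size(640, 480, P3D);
      // stand-in for the camera grab so the sketch runs on its own
      for (int i = 0; i < frozen.length; i++) frozen[i] = color(random(255));
    }

    void draw() {
      background(0);
      amount = lerp(amount, exploded ? 1 : 0, 0.05);   // ease between the two states
      noStroke();
      for (int y = 0; y < 120; y += 2) {
        for (int x = 0; x < 160; x += 2) {
          color c = frozen[y * 160 + x];
          // darker pixels sit further back: brightness 0..255 maps to z -400..0
          float z = map(brightness(c), 0, 255, -400, 0) * amount;
          pushMatrix();
          translate(x * 4, y * 4, z);
          fill(c);
          rect(0, 0, 4, 4);
          popMatrix();
        }
      }
    }

    void mousePressed() {
      if (mouseButton == RIGHT) exploded = true;    // RMB: break the frame into depth layers
      if (mouseButton == LEFT)  exploded = false;   // LMB: retract the pixels back to the flat plane
    }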












    so the final word is: the program is a tool that lets the user break up a given moment in space into a pseudo-hologram (with depth determined by the brightness of the colors that make up the space), and thus explore those colors as individual pixels and objects in space.
    Sun, Mar 16, 2008

    the intention of the project was to create a somewhat complex colony of spheres using jared's box fitting code as the base code for the emergent process. for the most part, i was more intrigued by the visual process than by the code itself (which i tweaked without a comprehensive understanding of its structure). what was most successful about the program was the unprecedented 'swarming' of the spheres in certain areas of the canvas. in jared's code, the rectangular boxes arrange themselves based on some kind of collision/color detection. in my blind modification of jared's code, i unintentionally forced the attributes that jared had intended for rectangular boxes onto spheres/ellipses. as a result, all hell broke loose when it came to collision detection, but surprisingly it yielded a pretty nice visual result. so this is essentially jared's box fitting code wrongly applied to non-rectangular objects, with additional variables and functions that control the flow, positioning, and outcome of the code.
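    jared's actual box fitting code isn't reproduced here; the toy sketch below only illustrates the kind of collision test involved, and why swapping rectangles for circles changes it: a rectangle test compares overlapping edges, while a circle test compares the distance between centres against the sum of the radii. presumably a mismatch along these lines is what let all hell break loose above.

    // toy circle-fitting sketch (not jared's code): grow a circle at a random spot
    // until it touches an existing neighbour, using a circle-vs-circle collision test
    ArrayList<PVector> centers = new ArrayList<PVector>();
    ArrayList<Float> radii = new ArrayList<Float>();

    void setup() {
      size(600, 600);
      background(255);
      noFill();
      stroke(0);
    }

    void draw() {
      float x = random(width), y = random(height);
      if (collides(x, y, 2)) return;                 // spot already occupied, try again next frame
      float r = 2;
      while (r < 60 && !collides(x, y, r + 1)) r++;  // grow until touching a neighbour or the cap
      centers.add(new PVector(x, y));
      radii.add(r);
      ellipse(x, y, r * 2, r * 2);
    }

    // circles overlap when their centres are closer together than the sum of their radii
    boolean collides(float x, float y, float r) {
      for (int i = 0; i < centers.size(); i++) {
        if (dist(x, y, centers.get(i).x, centers.get(i).y) < r + radii.get(i)) return true;
      }
      return false;
    }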


    /final print - recorded at the 107th frame


    /preferred output - recorded at the 955th frame

    final word: some things are designed to be viewed on the screen (preferably at resolutions under 1680x1050), and this emergence project turned out to be one of them. in print, particular areas of the final composition (which i did not intend the viewer to see) are blown up to scale in high-res. moreover, the program could only run for less than 1/7 of its intended length, due to severe out-of-memory errors that keep it from rendering past the 107th frame. therefore, what is presented on screen here (in the second image) and in the previous posts are the preferred representations of the project's outcome and visual process.

    Sun, Feb 3, 2008

    the concept of emergence is somewhat self-referential in this program... unable to wrap my brain around the base code (which is shamelessly borrowed from jared tarbell's 'box.fitting' example), i ended up just toying a little with it while trying to discover new ways to push the image to a different level. unfortunately, with my insufficient understanding of the base code (and coding in general), the extent to which the final image can be pushed is severely limited... as a result i mostly only introduced new variables and functions to the base code instead of restructuring it to make it work the way i had intended. so at the end of the day the concept of emergence is only slightly noticeable in the surfacing of the complex bubbly pattern, which was constructed on a visual basis as a simple program, as opposed to a highly sophisticated program with behavioral attributes and state-of-the-art collision detection.

    NOTE: recording the video either unexpectedly crashes the program or sometimes causes it to not run/respond at all. a series of screenshots and the original applet in the web folder have been provided to compensate for this unpredictability.
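    for what it's worth, the 'visual basis' idea boils down to something like the toy sketch below (not the modified box.fitting code itself, and the values are placeholders): translucent circles layered on top of each other, with a single opacity variable doing most of the visual work, and saveFrame() used to grab stills instead of recording video.

    // toy illustration of the opacity idea; the values are placeholders
    float bubbleAlpha = 30;   // raising this makes the overlaps read as a denser bubbly pattern

    void setup() {
      size(800, 600);
      background(255);
      noStroke();
    }

    void draw() {
      // layer translucent circles; the pattern emerges purely from accumulated overlaps
      fill(0, bubbleAlpha);
      float d = random(10, 120);
      ellipse(random(width), random(height), d, d);
      // grab stills instead of recording video, which kept crashing the real program
      if (frameCount % 500 == 0) saveFrame("bubbles-####.png");
    }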


    1/

    2/

    3/

    4/ when variables dictating the opacity are set to higher levels and the images look more exciting than the program.
    Mon, Jan 28, 2008

    REWORKED CONCEPT: visually construct something that is in conversation with the concept of emergence. therefore, create an abstract visualization of emergence using only visual attributes as opposed to behavioral attributes. the current concept speaks to the idea of immersion and emergence, which was inspired by the complex patterns that arrays of weightless bubbles can create.
    Wed, Jan 16, 2008

    image 1:
    http://www.artnet.com/artwork/424180066/545/erwin-wurm-convertible-fat-car-porsche.html
    in the context of space collective:

    The underlying spirit of Erwin Wurm's sculpture is described, in the online catalogue, as a "dynamic act" as opposed to a "static object." For the most part, I guess it is the evidence of a 'process' behind the car's obesity that gives this static piece its dynamism, and the dynamics of the sculpture are experienced largely through the cognitive 'interaction' it provokes between itself and the viewer. This image had me thinking on one particular level: as a social/technological commentary. It seems to show the consequences of taking the 'space' we have for granted. In this day and age, we are entitled to expand our technological endeavors with whatever means we have at our disposal, yet, no offense to the Porsche, we have been pouring money into improving commodities and things that just don't matter in the long run. The obesity of the Porsche then shows how easily we can become bloated with useless/meaningless endeavors, when we could be roaming/exploring the space around us to find better uses of our effort and time. So it seems to me the image is saying that we should stop feeding the Porsche our time and effort, or we will stagnate in one spot and fail to take advantage of our potential.

    image 2:
    http://upload.wikimedia.org/wikipedia/en/3/3d/2ch_AA_Characters.gif
    In the context of space collective:
    Recurring 2channel characters/expressions generated with SHIFT_JIS characters. Considering that this is just plain text art, in the context of the assignment it may seem anything but fascinating. I find SHIFT_JIS art, however, to be far more elaborate than emoticons. Like emoticons, SHIFT_JIS art is a visual language of expression that can be comprehended on a visual basis alone. Yet, unlike emoticons, it is relatively more culture-specific. In being somewhat culture-specific, it also becomes inherently more 'unique' than emoticons, considering that, to an extent, the visual language of SHIFT_JIS art cannot be fully understood on the visual level alone.
    Wed, Jan 9, 2008