Saturday, December 29, 2012

C4CNC101 - Section 1: Intro To Functions

DISCLAIMER:
(1) If you are already familiar with functions, variables, types, and other coding basics, C4CNC101 is not for you.  I'd recommend taking a look at something like The Nature Of Code if you're interested in getting up and running with Processing.
(2) A familiarity with digital art in general, including coordinate systems, pixels, etc. will be extremely helpful.  If you've ever used Photoshop, Illustrator, or any other digital art program, 2D or 3D, you should be good to go.
(3) If you're a programmer, you'll probably find tons of inconsistencies or things I'm glossing over.  My goal here is not to teach programming; it's to get people who want to use code as a tool to create or augment the creation of art up and running, with enough foundational knowledge to research deeper if they so choose.  I've done a lot of thinking about this, and I believe the information as I've presented it is true in spirit and within the scope of Processing.

    Ok, so hopefully by now you've downloaded and installed Processing, signed up for an OpenProcessing account, and joined the C4CNC Classroom on OpenProcessing.  The first step is really the only requirement, but I do recommend at least peeking around OpenProcessing to get an idea of what's possible.  I'll warn you in advance that if you're just starting out, it can be pretty easy to get overwhelmed by the breadth and depth of the content there, but fear not!  Hopefully by the time we're through these first five lessons, you'll know enough to read through some of the sketches and even build your own sketches based on them.  As I mentioned in the last post, if you come across any sketches or effects you'd like to remix, break down, or dive into more deeply, let me know and I'll work something out for a future set of tutorials.  Alright, so let's begin!

    First, let's conceptualize a computer program as nothing more than a set of commands or instructions that processes information and produces results based on the specifics of the information and the commands.  While that's a bit of an oversimplification, on some level this holds true for any program, from the small visualization sketches we'll be writing here all the way up to full-on operating systems like Windows or Linux.  We call these instructions functions and we call the information data.  So let's write our first program.  Open Processing and type the following function:

ellipse(50, 50, 50, 50);

    Once that's in place, press the Run button (it looks like a 'Play' button) in the upper left-hand corner.  Alternately, you can check out the sketch on OpenProcessing (1-1: Basic Functions), although I highly recommend you follow along by typing the code yourself to get the most out of these lessons.  Either way, you should see something like the following:

Step1_0

CODERSPEAK: When we issue a command in a program, we say we are calling the function or making a function call, and when we provide data to a function, we say we are passing an argument (or arguments).  So when we issue a command and give it some information, we are calling a function with arguments.

    This may not look like much, but it's actually a valid Processing sketch, so congrats.  In some languages, Python for example, a single function call like this could also constitute a valid and complete program, so not bad for a first step!  Sure, it's not very exciting and doesn't do much, but we'll get there.

    Now, let's take a moment and break down our function call.  For our purposes, every function call will be a name followed by a set of parentheses.  If we're passing arguments to the function, they'll be between the parentheses, separated by commas.  And finally, we end our function call with a semicolon, so Processing knows to move on to the next function.  Thus, the skeleton for any function call is:

functionName(argument1, argument2, argument3, etc);

    Recall that we started out by defining a program as a set of commands (functions) that processes information (data) to produce a result.  Arguments are how we provide the data to a function.  In cases where we're passing multiple arguments, each argument is used by the function to perform a specific task along the way to producing the final result.  So in the case of our first sketch here, as the programmer we're telling Processing to:

Draw an ellipse centered 50 pixels along the x-axis and 50 pixels along the y-axis, with a size of 50 pixels along the x-axis (width) and 50 pixels along the y-axis (height).

    Most, if not all, publicly available coding tools and environments have references that describe (some in more detail than others) what each argument does.  For example, take a look at the reference page for the ellipse() function, which not only details the arguments, but also provides some useful tips on calling ellipse().
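To make that mapping concrete, we can annotate the call right in the code.  Anything after // on a line is a comment, which Processing ignores, so comments are a handy way to remind yourself what each argument does:

```processing
//      x   y  width  height
ellipse(50, 50, 50,   50);  // centered at (50, 50), 50 pixels wide, 50 pixels tall
```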

    Alright, so let's practice a bit by adding a few more functions.  Add another function before the ellipse() call, so your sketch contains the following function calls.  Note that we're changing some of the arguments to the ellipse() call, and you should feel free to change any of the arguments to any of the functions.  Experimentation is key to learning!

size(400, 400);
ellipse(200, 200, 50, 50);

    As you can probably tell from the result, the size() function sets the size in pixels of our sketch's window.  Even though the two functions take different numbers of arguments and produce markedly different results, you can see that they both follow the same skeleton we outlined above, i.e.:

functionName(argument1, argument2, argument3, etc);

    Before we get a little more advanced, let's add a few more basic Processing functions, again for practice, and also to see how we can affect what we're drawing on-screen and start getting an idea of the kind of drawing functionality Processing makes available to our sketches.  We're going to add three more function calls in between our size() call and our ellipse() call: background(), stroke(), and fill().  Type these functions in as presented below:

size(400, 400);              // window size: 400 by 400 pixels
background(0, 0, 0);         // clear the window to black (red, green, blue)
stroke(255, 255, 255);       // outline color: white
fill(0, 128, 255);           // fill color: blue
ellipse(200, 200, 100, 100); // centered at (200, 200), 100 pixels wide and tall

    As the saying goes, the more things change, the more they stay the same.  As we add functions, we see the results compound and the output become more complex, but in the end, all functions are called in the same manner using the same syntax.  Feeling comfortable typing in functions?  Then give the following exercises a try and see what you come up with.  Questions?  Please post them in the comments!

EXERCISE 1: Draw 5 different ellipses with different sizes and in different locations.  Be sure to check out the Processing Language Reference for ellipse() for more details on how the ellipse() function works.  Try changing some of the arguments to the other functions as well!
Exercise 1-1

EXERCISE 2: Take a look at the Language Reference for background(), stroke(), and fill().  Now, take the previous exercise and change the stroke and fill color for each ellipse.  While you're at it, change the background color to something a bit friendlier than black; it's getting a bit gloomy in here...
Exercise 1-2

CODERSPEAK: You might be wondering how Processing knows what to do when we call any of the functions presented here.  Well, most, if not all, programming languages and environments come with a set of pre-existing functions and data that we use to build up our programs, which you'll often hear referred to as built-ins or library functions.  When writing programs, you'll use a combination of built-in functions and data, as well as functions and data you define yourself.  We'll discuss this process in the next couple of lessons.
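As a quick peek ahead at defining functions yourself (drawTarget() below is a name I made up, not a built-in), a function you write is called with exactly the same skeleton and syntax as the built-ins:

```processing
void setup()
{
  size(400, 400);
  background(0, 0, 0);
  drawTarget(200, 200);  // calling our own function, same skeleton as always
}

// Our own function: draws a pair of concentric ellipses centered at (x, y).
void drawTarget(float x, float y)
{
  stroke(255, 255, 255);
  fill(0, 128, 255);
  ellipse(x, y, 100, 100);
  fill(255, 255, 255);
  ellipse(x, y, 50, 50);
}
```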


Friday, December 28, 2012

C4CNC: The Road Ahead...

    If you've already stopped over to the OpenProcessing Classroom, you'll notice there's not a ton of really exciting stuff there (yet).  The idea is that we're going to ramp up to more advanced sketches from simple concepts, so we'll start slow-ish to get folks comfortable with both the ideas and just the act of typing code, then speed up as we build the actual project.  All that said, I figured I'd toss up a quick preview of where we're going to end up by the end of C4CNC 201.

Step1_0
We'll start here...

teaser
...and end up here.
Watch the sketch in action

    If in your travels through OpenProcessing (or elsewhere) you see one or more effects you're interested in learning about, let me know!  Given my limited creativity, I'm always looking for new project inspiration for tutorials; it would be fun to piece together a tutorial from a few different effects/sketches.  I guess I could just do that myself, but I'm curious as to the nature of other people's aesthetic sense...in a non-creepy way.  Anyway, back to work, I should have the first lesson up later today.

Thursday, December 27, 2012

Coding For Creative Non-Coders

    I'm going to stop short of saying I've become a bad programmer over the last year, but I will definitely say I've written a lot of bad code.  Well, alright, that's actually not true on two counts:
  • It hasn't been bad code, just poorly, well, no, lazily architected code.
  • I didn't really become a bad programmer, I just maintained my level of already-not-good programmering.


    That said, it's been really interesting making the transition from tools...eh...programmer to software prototyper/artist.  It's gotten me thinking quite a bit about what elements of coding are important to me as an artist, and more than that, how I would present the idea of programming to someone who was interested in software art, or creative coding as it were.  I've often ranted about how the reason more people don't get into programming is not because the act itself is hard, but because of how it's taught.  That's the problem I know I had with learning C++ especially: all the books start out with variables, then move on to simple functions, then eventually get to the meat of the language, but by then I'm just bored.  One could argue that people need to be introduced to the concepts slowly, which I agree with, but I think it's the order of the presentation and the way we connect concepts to each other that's up for review.  For example, when I'm in my (somewhat questionable) artist mind, I'm a very visual learner; I like to see things happen.  So when I have to read chapters that drone endlessly on about simple variables, of course I get bored.  In the same vein, I like building things.  I feel like if programming were presented in the context of building something, rather than as a set of disjointed programs/code listings that each focus on a single concept, it would be easier for people to make the connection from theory to application.

sineart_banner

    So where am I going with this?  Well, me being the arrogant know-it-all that I am, I think I've come up with a good way to teach, not programming in full per se, but enough to get someone started down the path of creative coding.  Even though I'm convinced the world doesn't need another set of "Learn to code using Processing" tutorials, I feel like I need to write this line of thinking down because it may just be useful.  Or I may be crazy.  Or...crazy useful, right, right?  Anyway, if you're interested in following along, I'd recommend a few things before diving in:
    I'll admit I'm picking a rather lame time to kick this off, as I'm up to my nasal hairs in CES demos, but my goal is to have the first five sections done by mid-February.  Stay tuned and I'll see you in the first lesson!


Thursday, August 30, 2012

the perfect engine

        Every time I type "blogger.com" into my address bar I'm reminded of a rant penned by a game writer a while back railing against the idea of being called a "journalist" because he was, in fact, a blogger, and as such didn't need to be bound by the tenets of that horribly oppressive, creativity-killing force called "journalistic integrity".  This got me thinking about a conversation I had a little while ago with a colleague about the death of Triple-A game development and consoles as we know them today.  I wondered at the time if that would give rise to a new form of game journalism where game writers aren't afraid to put together well-thought-out and well-written articles that reflect the notion of an older readership, as opposed to the current race for the eyes, hearts, and minds of 13-year-old boys.  Oh well, one can dream.


Shuriken Particles

        This idea of games and misguided representation, self or otherwise, has been quite a topic of thought for me recently, especially where the term "interaction" is concerned.  Don't get me wrong, I love lying in bed playing Xbox games on the ceiling (projectors, so awesome) or sitting at my PC playing whatever F2P MMO has my attention for the next month, but I've started to re-classify this sort of activity as something other than "interaction".  I mean sure, in the loosest sense of the word it is an interactive activity, but when I think of myself as an interactive developer/artist/performer, this is not what I classify as "interaction" or "interactivity".  I'll concede that games are a subset of a broader interactive media and entertainment classifier, but games do get most of the spotlight and are driving a lot of the technology in that space.


Relentless, The REV

      Therein lies a problem.  We can look at things like the Wii and the Kinect and talk about how they redefined interaction, which, sure, they did, but this kind of merging of the physical and digital worlds through different interaction paradigms is based on much older work (watch Underkoffler's TED talk for the details) that most of the public is probably unaware of, or at least has relegated to the realm of science fiction, or more recently to games and academia.  So of course big technology hasn't had a huge incentive to make sweeping changes in how we interact with technology, but then I suppose that could be chalked up as "good business".  Unrelated, I would argue that "good business" is why the country is in such a sorry state, but that's for a different blog post by someone who isn't me.


More Shuriken Particles

        So this presents some really funny irony, at least from my perspective.  While a lot of advanced interaction is being developed for and because of games, that idea of "good business" means that middleware vendors aren't necessarily rushing out to build support for this sort of thing into their products, because how do you sell depth camera support to an industry and consumer base who have written depth cameras off as motion control gimmicks?  Funny, this is the same ideology championed by a man who thought one of the killer features of his game was that you could pick up your own shit, but whatever.  I'm not here to skewer said vendors for not including this sort of support, but it does bring up an interesting idea: that of "graphics engines" vs "interaction engines".  It's been a running debate, the idea of Unity vs UDK vs ofx vs Cinder vs whatever, so there's a lot to think about.  The distinction was solidified best for me in a post on the Cinder forums on the same topic.  Someone wrote "Life is easy if you stick to loading models (animated or not) into the scene graph, setting up transitions triggers, using basic particle emitters...", whereas "Cinder is made in a way that it's easy to import foreign technologies."  That pretty clearly sums up the difference between graphics and interaction engines in my experience.  Being able to just have webcam access as a ready-made call, or an easy path to adding an external SDK for some non-native hardware, is key to quickly prototyping and building advanced interaction.  While a lot of game engines do have facilities for this, it's not always quick and easy, but I don't feel like this is the way it has to be.  I've always felt that one of the things that's really helped me think in terms of interaction design is having been around games for so long, so it seems logical that there would/could/should be some sort of convergence.
        Not that there isn't already; search Vimeo and you'll find games made with Cinder and ofx sitting alongside interactive installations made with Unity.  I think if it were easier to build either type of experience on either platform, we'd truly have convergence between the idea of games and advanced interactivity/interaction.


Addition Subtraction

        Ultimately, I'm not saying Unity/UDK need gesture recognition or anything like that, just an easier way to get it into the engine.  Keep the .NET integration current, get rid of UnrealScript, don't predicate everything on the idea of mouse and keyboard events (or at least give me an easier way to hook into them), that sort of thing.  There's no reason for game engines to stay in the dark ages of interaction, or for interaction engines to stay in the dark ages of rendering.

        Just some thoughts, nothing really meaningful here; I just hadn't blogged in a while and felt like writing some words.  For the record, I'm using both Unity and Cinder.  I've been playing with some new Unity stuff recently that's got me super excited; I'll be able to talk about that pretty soon here...


Cymatic Ferrofluid

Wednesday, July 11, 2012

we maed a v1d w1th f!shez 1n it!!1

        So I've been following this whole OUYA thing somewhat closely, and as much as I feel like I don't know enough about their business plan to kickstart the project, I feel like I'm actually a pretty fair representation of the target audience.  I think about how I use my Xbox nowadays and yeah, it's pretty much just to play XBLA games.  That makes the OUYA, at 99USD, pretty much an impulse buy for me, so I'm supposing if there's enough content, it would make sense as a gaming solution.  Here's the problem: I don't play XBLA games that often.  I either play a PC game because I'm on my PC working and I need a break after a few hours, or I play games on a mobile device because I'm out and I'm bored waiting on something for a bit.  In that vein, I thought it might be cool to have a mobile Android box with an HDMI out to run Processing sketches on, if only I didn't have to plug it in.  Really, that's probably not a huge issue though.  If the hardware ends up being as open as they say, it might be moddable to this end.  But then I suppose the question remains why I still just wouldn't use an Android tablet...I know, I know, don't kick if you don't believe, but hey, it's the internet, I'm allowed to have opinions.


...Mobile p5 engine..?

        ...But let's be honest, you're not here to listen to me rain on the internet's parade, so let's get on to the meat of it.  As the title sorta tries to allude to, we built a sort of installation!  Or at least, we got something to an early workable stage.  Sometime last week, Chris had the idea that we should motion-track Annie's betta and use the resulting data to drive a flocking simulation.  To keep the footprint small, we decided to use a webcam-based solution.  I spent a few hours working on a basic frame differencing solution using some code from processing.org, tweaked the overall performance and output, and came up with this:


        I did put a version up on my GitHub; it's not quite as usable as I want it to be (a few more dependencies than I'd like), but in a revision or two it'll be where I'd like it to be.  Meantime, it's definitely usable enough to do your own simple tracking-based sketches, so have at it!  If you're interested in putting your own motion tracker together, check these out:

Frame Differencing by Golan Levin
Frame Differencing with GSVideo

        We got all the code merged and tweaked to an initial state last night, and here's the result so far, fish courtesy of our buddy Ermal's Fishcam:


Live Fish Flocking from Chris Rojas on Vimeo.

        Next up, live tracking Annie's fish per the original spec.  Annie had some interesting ideas about how we might be able to enhance things with some projection or some kind of external display to liven the overall piece up.  Version 2.0 incoming!  Created with Processing, toxiclibs, and GSVideo.

Tuesday, July 3, 2012

in difference...

         ...we may find the answers we seek.  Or at least a cool way to do motion-ish tracking.      

        This makes the second time this week I've been researching something just for the hell of it and suddenly found a use for it a day later, although I guess the rounding algorithm stuff did have an actual purpose.  Oddly enough, the project I want to use it for was waiting for me to figure out some version of this little snippet.  Originally I was thinking I was going to have to do it as a full-on hand-tracking/skeletal-tracking thing, but if I can figure out some smoothing, I think this'll work pretty nicely on its own.  We're putting together a mini-installation at work, details of which I'll not spoil for you here, but it should be fun...

        I started wondering about frame differencing after repeated visits to testwebcam.com (yes, it's SFW).  Simple effect, but it looks really cool.  It's also a great way to approximate motion on a webcam stream.  The initial implementation didn't take me too long; now I just have to implement some optical flow, or maybe even just a cheap trailing-3 tap, who knows?  Of course, you're welcome to solve that problem yourself if you wanna copy-paste this into your own copy of Processing and hit Ctrl-R...seriously, your copy of p5 looks a little lonely and unloved, you should do something with it...I did some tests with lerping and vector/distancing, but I think I'm going to need a real filter...

faketracker
This is totally not a photoshop, run the sketch if you don't believe me...

        One of these days I need to start seriously optimizing some (all) of these sketches and my ofx projects.  It's ok to suck out loud for now, but truth be told, I think I'm actually a better programmer than that.  I mean, not that I'm a good programmer, I'm just decent enough not to make silly un-optimization mistakes.  Eh, this one'll optimize itself out anyway, methinks; I imagine the filtering isn't going to be cheap...I also really need to start taking video...
//PREPARE YOURSELF FOR THE COMING OF 2.0
//GET GSVIDEOOOOkay it's actually not going to be
//that big of a transition.
import codeanticode.gsvideo.*;

PImage lastFrame;
GSCapture vStream;
int diff;
int thresh = 32;
ArrayList<PVector> dVals = new ArrayList<PVector>();
PVector p_m;
PVector lastP;
void setup()
{
  p_m = new PVector(0,0);  
  lastP = new PVector(0,0);
  size(640, 480, P2D);
  frameRate(30);
  lastFrame = createImage(width,height,RGB);
  vStream = new GSCapture(this, width, height);
  vStream.start();
  background(0);
}

void draw()
{
  diff = 0;
  loadPixels();
  dVals.clear();
  
  if(vStream.available())
  {
    vStream.read();
    vStream.loadPixels();
    lastFrame.loadPixels();
    for (int x=0;x<width;x++)
    {
      for (int y=0;y<height;y++)
      {
        int i = y*width+x;
        color c = vStream.pixels[i];
        color l = lastFrame.pixels[i];
        int c_r = int(red(c));
        int c_g = int(green(c));
        int c_b = int(blue(c));
        int l_r = int(red(l));
        int l_g = int(green(l));
        int l_b = int(blue(l));
        
        int d_r = max(0,(c_r-l_r)-thresh);
        int d_g = max(0,(c_g-l_g)-thresh);
        int d_b = max(0,(c_b-l_b)-thresh);
        
        int d_s = d_r+d_g+d_b;
        diff += d_s;
        if(d_s>0)
        {
          dVals.add(new PVector(x,y));
        }
        pixels[i] = vStream.pixels[i];
        lastFrame.pixels[i] = c;
      }
    }
  }
  updatePixels();
  if(diff>0)
  {
    p_m = avgArrayList(dVals);
    fill(255,255,255);
    ellipse(p_m.x,p_m.y,40,40);    
  }
  lastP = p_m;
}

PVector avgArrayList(ArrayList<PVector> arr)
{
  float sumx=0;
  float sumy=0;
  for(int i=0;i<arr.size();i++)
  {
    PVector c = arr.get(i);
    sumx+=c.x;
    sumy+=c.y;
  }
  return new PVector(sumx/arr.size(),sumy/arr.size());
}

void keyPressed()
{
  if(key=='q')
  {
    thresh+=1;
    if(thresh>128)
      thresh=128;
  }
  if(key=='a')
  {
    thresh-=1;
    if(thresh<8)
      thresh=8;
  }
}

void stop()
{
  vStream.stop();
  vStream.dispose();
}
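For the "real filter" mentioned above, one cheap starting point (a sketch of my own, not something the code above actually does) is an exponential moving average on the tracked point, applied to p_m before drawing:

```java
// Hypothetical smoothing helper: exponential moving average on a 2D point.
// alpha close to 0 = heavy smoothing (laggy), close to 1 = mostly raw signal.
class PointSmoother {
  float x, y;
  float alpha;
  boolean primed = false;

  PointSmoother(float alpha) {
    this.alpha = alpha;
  }

  // Blend a new sample into the running estimate.
  void update(float nx, float ny) {
    if (!primed) {
      // First sample: adopt it outright so we don't lerp in from (0, 0).
      x = nx;
      y = ny;
      primed = true;
    } else {
      x += alpha * (nx - x);
      y += alpha * (ny - y);
    }
  }
}
```

In draw(), something like smoother.update(p_m.x, p_m.y) followed by drawing the ellipse at (smoother.x, smoother.y) would damp the jitter at the cost of a little lag; tuning alpha trades one against the other.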

grabbing hands

        ...grab all the video frames they can.  Or maybe they don't...

        It's funny, I've always been a graphics-ish programmer.  I don't mean a hardcore rendering programmer or any of that madness, but I've always been motivated to code by graphics.  Back in high school, the first thing I dove into while learning C was the BGI, mainly so I could learn how to code graphics for demos (never mind that everyone was writing demos in assembly, which I was also learning so I could...yeah, write graphics routines).  I can only imagine where I'd be nowadays if I'd had things like openFrameworks to tinker with.  Of course, we did have GLUT back in my day, which I did spend a fair amount of time mucking about with.  Cool how some things just stand the test of time.

        Taking the plunge into video capture with openFrameworks tonight, I whipped up another quick, bitcrush-esque vis.  Took some cues from some of the Processing vidcap samples I've been doing; some ideas just keep working:


ofVidTest
Should you lose your disc, you will be subject to immediate de-resolution...


        ...And of course, here're some codes:
/* testApp.h */ 
#pragma once

#include "ofMain.h"

class testApp : public ofBaseApp{
 public:
  void setup();
  void update();
  void draw();
  
  ofVideoGrabber grabber;
  unsigned char* gPixels;
  ofImage img;
};

/* testApp.cpp */
#include "testApp.h"

void testApp::setup()
{
 grabber = ofVideoGrabber();
 grabber.setVerbose(true);
 grabber.initGrabber(640,480,true);
 ofEnableAlphaBlending();
}

void testApp::update()
{
 grabber.update();
}

void testApp::draw()
{
 ofBackground(0,0,0);
 gPixels = grabber.getPixels();
 for(int x=0;x<64;x++)
 {
  int xStep = x*10;
  for(int y=0;y<48;y++)
  {
   int yStep = y*10;
   int i = (yStep)*640*3+xStep*3;
   ofSetColor(gPixels[i],gPixels[i+1],gPixels[i+2],128);
   ofSetRectMode(OF_RECTMODE_CENTER);
   ofNoFill();
   ofRect(xStep,yStep,10,10);
   ofSetColor(gPixels[i],gPixels[i+1],gPixels[i+2],255);
   ofFill();
   ofCircle(xStep,yStep,3);
  }
 }
}
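The index math in draw() leans on the layout of the grabber's pixel buffer: tightly packed RGB, three bytes per pixel, row-major at 640 pixels wide, so the offset of pixel (x, y) works out to (y * width + x) * 3.  Restated as a standalone helper (the name is mine, just for illustration):

```java
// Offset of the red byte of pixel (x, y) in a tightly packed, row-major
// RGB buffer (3 bytes per pixel) that is `width` pixels wide.
// The green and blue bytes follow at +1 and +2.
class RgbIndex {
  static int of(int x, int y, int width) {
    return (y * width + x) * 3;
  }
}
```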

Tuesday, June 26, 2012

post reality filter

        You know, for as much as I love music, I never really considered myself much of a music snob, or at least not a genre snob anyway.  The whole idea of having to create millions of sub-genres in some feeble attempt to differentiate yourself from the next guy, other than by just having cool music, always boggled my mind.  Even worse are the music critics and journos who are always coming up with new genres just to, I dunno, safeguard their idea of what a particular genre was or some such nonsense...For as much as I rail on game journalists, it's probably true of most media journalists.  The genre qualifier "post" always bothered me the most; what, you can't think of a new genre name, so you just take the laziest path?  "Oh, this is what comes AFTER <genre>", in which case it might just be arrogance, i.e. really, wow, you think your music is so different that you alone are going to define what comes next?  Wow, alright then...

        Anyway, here's a fun little video experiment I knocked together in Processing yesterday.  You can copy and paste the code below into a sketch and run it; it should be that simple.  Ohhh yes, you'll also need GSVideo, as I'm not using Processing 1.5's native video.  After reading a few pages on what was required to get it working, based on software that may or may not really exist anymore, and also reading somewhere that p2 was going to move to GSVideo anyway, I figured I might as well take the plunge.  It's a pretty friendly library overall...On the subject of laziness, I gotta admit I probably could've captured a video of this, but didn't.  That's probably ok though; you should really run this yourself and see the effect, maybe play around with it a bit, see what you come up with...


vidcap_02
Hello from videoland...

import codeanticode.gsvideo.*;

PVector gridStep = new PVector(16,8);
int gridX;
int gridY;
GSCapture vStream;

void setup()
{
  size(640, 480, P2D);
  frameRate(30);
  gridX = int(width / gridStep.x);
  gridY = int(height / gridStep.y);
  vStream = new GSCapture(this, width, height);
  vStream.start();
  background(0);
  noStroke();
}

void draw()
{ 
  if(vStream.available())
  {
    vStream.read();
    vStream.loadPixels();
    
    //Fade previous frames out with a translucent black rect
    //instead of clearing, leaving motion trails. (Thanks, RoHS)
    fill(0,0,0,8);
    rectMode(CORNER);
    rect(0,0,width,height);
    
    for (int i = 0; i < gridX; i++)
    {
      for (int j = 0; j < gridY; j++)
      {
        int x = i*int(gridStep.x);
        int y = j*int(gridStep.y);
        //Flip horizontally so the sketch acts like a mirror.
        int loc = (vStream.width - x - 1) + y*vStream.width;
      
        color col = color(red(vStream.pixels[loc]),green(vStream.pixels[loc]), blue(vStream.pixels[loc]), 32);
        float vradius = brightness(col)*0.1;
        
        fill(col);
        if(vradius<12.8)
        {
          ellipseMode(CENTER);
          ellipse(x, y, vradius,vradius);
        }
        else
        {
          rectMode(CENTER);
          rect(x,y,vradius,vradius);
        }
      }
    }
  }
}

Sunday, June 24, 2012

First Light

        When I was working in SoCal, I worked with a rather brilliant tools programmer who used to refer to MVPs as the "First Light" version of something.  I like that term; MVP sounds so stuffy and business-y.
     
        Probably the most impactful realization I've had since coming to Intel is that I want to be a creative coder when I grow up.  My absolute dream job would be to build interactive digital art pieces/installations, and I know it's possible, because there are people out there doing it.  The downside is that it's an extremely niche market.  It's like Tech Art or Concept Art on steroids; that is, more people want to do it than there are positions available.

This does not deter me.

        As daunting a challenge as it may seem, I have a particular set of skills, skills that will only partially help me achieve my new position through poise and audacity.  To this, I must now add resolve (guess both of those movies and I'll...I dunno, buy you a shot next time I see you).  My resolve is that I need to acquire the rest of the skills required for this sort of work, and in the process maybe build some simple starting-out pieces.  As I mentioned in my previous post, the big skill I feel I'm lacking is not a technical one but an artistic one.  Taking a step back from visualizing tweets, I decided to go even simpler and visualize some simple data-over-time type sets.  So, armed with pygame, I settled on visualizing periodic functions (sine and cosine in this case) and decided to see what I could come up with.

sine_tplt
I started simple, just so I'd have a template to work with...


sine_vis
..but things got pretty crazy pretty quickly.

        Overall this was a fun exercise and really helped me get my feet wet.  The thing I realized over the course of doing this is that the possibilities really are endless.  I mean, this is just a simple 2D library with no interactivity.  I haven't even started getting crazy with Processing, Cinder, vvvv, or any of the other insanely cool toolkits out there, to say nothing of something like Unity or other game engines.  I've already started working on some interactive stuff; mayhap I'll post some of that next once it gets a little more presentable.

        I uploaded all the source from this to my GitHub, or you can just check out the results on my Vimeo channel.  I challenge you to see what you can come up with yourself; you might be surprised how addicting it is...

Saturday, June 9, 2012

"Fun" is part of "Functional"

        It's been a really interesting...week?  I dunno, to be honest, time has completely disappeared, which is a good thing.  I've always had some interesting thoughts about time, but that's for another blog post, or maybe a good smokeout.  Given the kind of work I'm doing now, a little bit of the ol' green might not be a bad idea, but we'll save that as a last resort.

        As I think i've probably mentioned to some folks, one of the things I'm working on now, or at least was supposed to be working on, is "new" interaction ideas, natural/new user interfaces, and all kinds of other buzzwords that add up to "do what nintendo and apple did, but better".  No pressure.  I'm going to be honest with everyone, i'm NOT an incredibly brilliant, innovative, or creative person, but i think i know what i like, which i feel like could be an asset in this space.  This all sort of hit home a little while ago while i was prototyping some tech for an interaction demo to showcase a...pretty freakin cool bit of technology my co-workers Stan and Sterling have been grinding on for a bit.  Here's a short demo:


        Apologies for the lame-o compression, it's been a while since i've been a video producing guy man, something i intend to change, and by that i mean change that i haven't been producing videos, not that i'm a guy man.  Aaaaaanyway...So here's the thing, I found myself just playing with this demo for...minutes at a time, seriously.  And by playing i mean, putting my mouse on one of the fireflies and watching it light up.  How crazy is that??  It's funny though, i'm reminded of a story Mom used to tell me about how Dad would take me to the arcade when i was a wee bairn and sit me on the pinball glass while he would play.  Apparently, i would paw at the glass trying to get to all the shiny-movey things.  While i have no memory of this, i feel like it probably affected me pretty deeply (and relevantly!).

        Fast forward to about a week ago, my co-worker Chris showed me this video from The Creators Project (which I'm so going to next year), which really got me thinking.  Unlike me, Memo Akten actually is a brilliant, innovative, creative individual, and i could watch his stuff forever.  Seriously, TRY and shut this off once you start watching it, you won't be able to...


        This got me thinking a few things, in no particular order or connectivity:
  • Complexity from simplicity is beauty
  • Chronological doesn't mean linear
  • I need to start small
  • "Fun" is part of functional
  • Graphics programming can make things that aren't games
  • In digital world, data is just numbers, and numbers make pictures
        Now, the last two points, i imagine you're all going "well duh", but here's the thing.  I'm VERY new to all this.  For all i've been around graphics programming and interaction design, i have to retrain myself to think of things not as game tools or game concepts.  I understand that the delta between "game" and "interactive experience" from a high level isn't that great, but at the same time, there are things you wouldn't do in a game, presentation-wise, that i feel like are probably permissible in other spaces.

        So i decided to start small and visualize an obvious and relevant known: Tweets!  More specifically, mentions.  I wanted to especially get away from linear representations and just think of mentions as a big data cloud.  How would i display and navigate said cloud?  I grabbed Tweepy and pygame and started messing around with some different ideas.  Everything here is WIP, not even anywhere close to being a solid idea, for now i'm just playing around.


twalaxy
Iter 0: Branching off from the most recent mention, age denoted by line color.

twalaxy_2
Iter 1: Main mentions represented by orbits, age denoted by distance from center. Related tweets branch off of each mention.


Iter 2: Same as 1, related tweets represented radially.

        This is fun, but you know, I'm not nearly as drawn into this as I am by the fireflies for some reason, even after running it on some touch-enabled hardware.

        But that's the thing, I can figure that out.  Like i said, in this space, i'm a BABY.  I need to really just immerse myself in all this stuff, the way I did when I was a game developer.  Taking a bit of my own advice, I'm going to go back to square one and start small.  Taking a page from Memo Akten, I'm going to visualize some simple data-over-time sets just to get my head more into that space.  Taking another page from Memo Akten, i'm going to start with simple functions.  Maybe I'll use a cosine though, just to be slightly different, hehe...Stay Tuned.  I feel like i'm finally hitting a bit of a stride here, like I finally gave myself permission to work on this stuff.  It's funny, it's been such a big mandate for our group, but i've felt like...it was too much not like work to be working on.  Ah well...at least i'm changing before it's too late.

Friday, June 1, 2012

i can see forever

        It's funny how sometimes just being around a term or concept so much skews your perspective on it.  Case in point, at work, one of the hot topics is CV (computer vision), like, you're immersed in it here.  Sometimes it's hard to not feel like everyone in the world is researching/investigating/developing products against a certain technology and you're being left behind.  Of course, in the case of CV, that's probably absolutely true, i'm sure everyone is trying to figure out how to use it to drive the next wave of great interactive experiences, but then, i guess that's the real hot topic.  Anyway, i just thought it was funny how over the last month i've felt like CV is the new hotness and everyone in the world is playing with it...

cvtest
OpenCV...one of the few things that doesn't break when it sees my ugly mug...

        That said, the thing that caught my eye most at Maker Faire was SimpleCV, which is a wrapper around OpenCV (ya think) and a few other super useful libraries.  They had a few really cool demos running, but it was more being able to look at the code and see how simple it was that really got my attention.  Not that OpenCV by itself doesn't have a ton of functionality already, SimpleCV just has a few more convenience functions, not to mention some slick blob finding stuff (among other things).  The big (and very specific) hurdle I ran into is that SimpleCV uses libfreenect, specifically the python bindings, for kinect access, and from everything I read online, getting the freenect python bindings to play nice with Windows is a bit of a chore.  I must say, in the last week or so, I've been amazed at how many tasks i've undertaken that turned out to be...not so simple or well documented.  That means i'm either truly blazing the trail or I'm doing it horribly wrong and missing the obvious solution.  Let's assume the latter.


         Anyway, so after a day of trying to build python freenect and only being somewhat successful, i decided to see if I could get SimpleCV and pykinect to play nice together.  Thanks to some hard work done by another Codeplex-er who goes by the handle bunkus, i got pykinect and the OpenCV python bindings talking.  From there it was a pretty simple step to get pykinect and SimpleCV talking, since SimpleCV talks straight through OpenCV.  Well, it would've been if i'd remembered to install PIL...As much time as i'd spent digging through the SimpleCV ImageClass source, you'd think the dependencies would've been burned into my brain.

Facepalm

         But you know, at the end of the day, I've got pykinect streaming into SimpleCV, which is a good thing.  There's been a ton of interesting work done in open source kinect space, but having had my fill of trying to deal with OpenNI, i'm more than happy to use a supported, iterated on SDK with the most current features.  Who knows what I might want to do next?  So here's some code, yes i know, it's horrible and un-pythonic and there are globals all over the place, but it works...cleanup is next. Actually, next I'm going to work my way through the SimpleCV book and see what happens.  This + multiple kinects?  If performance doesn't murder me, could be pretty cool...

import array
import thread

import cv
import SimpleCV
import pykinect.nui

def frame_ready(frame):
    # pykinect calls this on its own thread every time a new color frame arrives
    global disp, screen_lock, img_address, img_bytes, cv_img, cv_img_3, alpha_img
    with screen_lock:
        # copy the raw 32-bit BGRA frame into our pre-allocated byte buffer
        frame.image.copy_bits(img_address)
        # wrap the buffer in the 4-channel OpenCV image...
        cv.SetData(cv_img, img_bytes.tostring())
        # ...then peel the alpha channel off so we're left with plain 3-channel BGR
        cv.MixChannels([cv_img], [cv_img_3, alpha_img], [(0,0),(1,1),(2,2),(3,3)])
        # SimpleCV happily wraps the OpenCV image, and off to the display it goes
        scv_img = SimpleCV.Image(cv_img_3)
        scv_img.save(disp)

if __name__=='__main__':
    # shared state for the frame callback (yes, the aforementioned globals)
    screen_lock = thread.allocate()
    cv_img = cv.CreateImage((640,480), cv.IPL_DEPTH_8U, 4)    # BGRA straight off the kinect
    cv_img_3 = cv.CreateImage((640,480), cv.IPL_DEPTH_8U, 3)  # BGR handed to SimpleCV
    alpha_img = cv.CreateImage((640,480), cv.IPL_DEPTH_8U, 1) # discarded alpha channel
    img_bytes = array.array('c', '0'*640*480*4)               # 640x480x4 raw byte buffer
    img_address = img_bytes.buffer_info()[0]
    disp = SimpleCV.Display()

    # hook the callback up to the kinect's 640x480 color stream and start it
    kinect = pykinect.nui.Runtime()
    kinect.video_frame_ready += frame_ready
    kinect.video_stream.open(pykinect.nui.ImageStreamType.Video, 2, pykinect.nui.ImageResolution.Resolution640x480, pykinect.nui.ImageType.Color)

Wednesday, April 18, 2012

in defense of the generalist

        A while back (years almost, come to think about it), I made a post on a friend's facebook on why I thought being a generalist, as it related to Tech Art, was probably a bad goal to pursue in the long term. I'll admit at the time I was definitely not in the best of headspaces and not at all thinking straight.  I'd also been through a few interesting work stints that had me questioning the value of Tech Art in general, or in other words I was feeling sorry for myself and projecting onto other people, trying to "save" them from a professional fate worse than death.  Or something to that effect.  But as I alluded to in my post the other day and as I remarked (or maybe admitted) to the same individual who i had previously publicly lambasted for being a generalist, if it were not for my own background as a generalist, there's no way I could be doing the job I'm currently attempting to bungle my way through...


Specialist-vs-GeneralistSpecialist-vs-Generalist2
As with many things, happiness is found somewhere in between...

        It's true, at every stage of my career I've picked up more skills and honestly, the only reason I ended up where I was doing what I was doing was because I felt it necessary to fill a niche in production.  My first job was actually as a VFX intern, but I did some facial animation, rigging, cinematics, modeling, hell, I even have a sound credit from my first job, of all things.  So I, of all people, definitely appreciate the value of the generalist developer.

        Now, I'm probably not telling anybody anything that they're not already thinking, or more correctly, haven't already realized.  At the same time, I feel like I've been one of the more vocal proponents of the Tech Artist as Tools Programmer, or at least the notion that Tech Artists should specialize in something at some point.  And yet, here I am, finding myself not really specializing in anything, or even more ironically, specializing at being a generalist...talk about taking the red pill.


Alien
Just take both of the pills, seriously, the purple pill is f'ing wicked...

        I guess what I'm really saying is not to specialize or generalize, but more like don't pass up an opportunity to learn new things.  One of the most disheartening statements I'd ever heard uttered was something to the effect of "Well, i'm working in Max right now, I don't need to learn python".  Yeah, but you could!  It's not about learning python, or C#, or C++, or Lisp, or MSP, it's about LEARNING.  When I was at Bungie, sure, I was a Maya guy through and through, but that didn't stop me from taking advantage of the fact that I was working with some great C# developers and learning C#, a skill that's serving me really well now.  And once you're comfortable with that new skillset, Apply It!  Take some time and come up with some creative solutions, in the process you'll gain a deeper understanding of said skillset.  I started learning Django just for fun, but in the process, I realized what a great asset management system Django would make, or to be more specific, what a great asset management system you could write on top of Django.  Being able to apply technologies in new ways is a great way to keep your interests up, especially if it's technology that already exists and is just waiting to be put to a creative use.  As Grandmaster Harper says "Take What's In Front Of You".


bloodymayhem gregharper_200x20
Tech Art...The Kajukenbo of Game Development

        The industry being what it is, you never know what your job sitch is going to look like in the next few years, heck even in the next year.  The skills you learn now are going to protect you for your entire career.  Who knows?  If it ever comes down to it and you have to jump from content creation, the fact that you used Python to do more than just write Maya scripts may be your saving grace.

        So in the end, expand your horizons, learn all you can, and embrace your generalist self.  I'm definitely not sorry I did, given the dividends it's paying now.

Saturday, April 14, 2012

prototyper's toolbox: pythonic edition

        I had a really good convo with a co-worker today and I realized I miss working in Python just a wee bit.  For prototyping especially, why wouldn't you use Python?   I'm surprised the thought hasn't really hit me until just now, but you know, the fast turnaround provided by working in an interpreted environment is fairly ideal for rapid prototyping, no?

        As much as I like Unity, it's definitely getting to the point where i'm having to do enough custom implementation that it's putting a damper on my relevant iteration time.  Don't get me wrong, i'm enjoying getting to dip my toes into other languages and features (learning P/Invoke has been pretty cool), but i feel like a Python based prototyping environment might get me a little further a little faster.  More often than not, Python modules work in whatever your Python environment is, and I understand that's not a 1-to-1 comparison as Unity is not a pure .NET environment, but still...

        So in my wanderings last night, I dug up a few different tools that seem like things I'll want to be diving into next.  I went to bed super early, so I haven't spent a ton of time with alot of these resources, but enough to feel like these will all be useful in the future.  Once i get done with this current demo cycle, I'm going to get out of Unityland and start writing standalone apps, just to get more practice writing big software projects.  I feel like I'll have more flexibility in a pure python environment and I'll probably get more done faster. Props to my co-worker Chris for inspiring my search...



Modules/Libraries
        Most of these provide the full suite of lower-level functionality you need for prototyping/building media applications (event loop processing, input management, etc), just import whatever else you need alongside them and you're good to go.  Fair warning, some of these are NUI/MT specific, but then, that's really what I'm into these days.  I gotta say it's also made me realize how much of a gadgetwhore I am, but man, I hate spending money on them, which is probably why i don't have one.  That and it also makes me realize how much of a touch interface freak I am.  I can't say I'm huge on gestures, but touch is cool.  I actually found a fairly extensive list of game engines that would probably make great generic event loop/input/rendering managers too, that's for another blogpost...

android-pythonPython for Android
I'm not 100% sure why i put this here but it seemed like a cool little aside. I only glanced over the docs a bit, so I can't make any recommendations, but I will say it's definitely more of a power tool than a ready to go development resource. More and more i keep finding reasons to want to get that Android device i keep threatening to get, but then i think, ugh, do i really want to spend money on another gadget? "Another", like i own a bunch of them already...

pygame_projects pygame and PyKinect
I lumped both of these together because most of the PyKinect samples run on pygame anyway, altho nothing precludes you from using your event loop manager (heretoforth referred to as ELMs) of choice. While this isn't necessarily MT, if you want to get started with NUI prototyping, this is honestly the way I'd recommend going. The Kinect SDK is super easy to write code against, you'll be doing crazy things in no time.

vpython_projectsVPython
Alright kids, gather around, and let me tell you a tale of POVRay...yes, I just dated myself horribly, but then, i'm constantly getting told that I'm too old for...certain people anyway, so i guess it is what it is. But really, that's about what we have here, even to the point where VPython lets you export to POVRay renderable files. How cool is that? Also includes its own version of IDLE, but I'm going to have to frown on that...

kivy_projectskivy
From what I understand, this is the way to go for Python MT development. I found a few other options (PyMT, etc), but this seems to be the one everyone recommends. If you're an experienced pythonista or have an established development environment, you can skip some of the weird setup requirements and dive right in. Get started now and whip some cool stuff up for their next contest!

tuio_projectspyTUIO
No discussion of MT development would be complete without including some TUIO bindings, so here you go! There are a few dependencies you might want to be aware of too (most notably reacTIVision), but once you've got everything down, this is another great way to get up and running with minimal hardware investment and setup. Make your own Reactable, actually, i've just hit on a plan...



Art IDEs
        Sure you could sit in a coffee shop and write a book, but let's be honest, if any chick looks over your shoulder and sees you rockin some cool interactive art in one of these apps, there's an icebreaker. Show her your camera or other input device, get her to play around with it a bit and it's all over except for the part where you embarrass yourself horribly trying to ask her out...Probably better to just get a bunch of buddies together and rock some interactive art jams on your laptops, takes me back to the days of laptop jamming with Live.  Or at least I think it would, i'll know more tomorrow, actually going to a group Processing session of sorts, should be fun. May even be the beginning of my social life...

nodebox_projectsNodebox 2
If you're a Maya or Houdini Tech Artist, or just an ICE freak, this one's for you.  I actually came across Nodebox a while back, but got a sad panda because it was MacOS only, altho I suppose it would have been a good use for my still not very used Macbook Pro. But yeah, this is basically Processing with a hypergraph attached to it, how cool is that? And Python to boot, so it's like Maya, but light and stable...ish.

shoebot_projectsShoebot
Shoebot is the best-ish of both worlds in that you can run it from its own IDE or you can import it into existing Python projects. It's also a...well, let's say derivative, for lack of a better term, of Nodebox, so most of the docs you find for one apply to the other. Not to dump on the work of the Shoebot guys, it's a "derivative" like pyprocessing or pycessing are derivatives of Processing.



Additional Reading
        A few last little tidbits to keep your head in the MT game, skip this if you're not interested in this sort of thing.

txzone_header31Txzone
All sorts of really fun musings here, not much of a learning resource, but great for inspiration and keeping up with what all's out there for the MT pythonista...


pymt_projectsGetting Started With MT Dev In Python
Some dated and slightly specific tutorials, but definitely great for a quick dip of your toes into the whole world of Python MT/NUI development.


Part 1 | Part 2 | Part 3 | Part 4




        I've got some pretty cool interactive art projects in the pipe right now, no ETA on any of them yet, but hopefully I'll be able to post some stuff up in the near future.  Got some longer term "milestones", altho I'm not sure milestone is the proper term since they tend to equate to public releases or showings of some sort...Nothing like a little pressure.

Wednesday, April 11, 2012

with trees

        Updated to include Teck's suggestions so it looks less like machine and more like man(god?).  Now we get something that looks like this:

treecut2

Yes, this looks much better methinks.  Only slight changes, just dropping in new terms really:

Tuesday, April 10, 2012

there is unrest in the forest

        You know, the thing with Unity is not so much the presence or absence of features, it's the same issue alot of software has, and that's the lack of not just documentation, but useful, relevant documentation.  I feel like too many companies assume that it's ok to just put out basic documentation for new users and then let the community pick up the slack.  That's alright I suppose, but that doesn't mean you can slack on the basic documentation.  For instance:
function GetHeight (x : int, y : int) : float
Description
Gets the height at a certain point x,y
Really?  A function called GetHeight() that returns a height value?  No Shit?  How about some information about the input parameters?  What is the expected range of values, for instance?  How about the return value, what do the units come back as?  You know, little things, back to that whole idea of useful and relevant, right?  Sheesh.



...Captain Obvious is Obvious, Captain...

        My ire comes from a perfect case, one of today's tasks was to spawn trees on a Terrain based on the heightmap.  All the pieces are there, it's actually quite a simple process, but again, it's just not knowing the little things that results in probably a bit more experimentation than necessary.  Really I'm just whining, but i've run into so many cases of this sort of thing with so many software packages I've used (looking at you, Maya), that at some point you just wanna contract out and document the stuff yourself for them.

        So rant over, if you're interested in doing this sort of thing yourself, here's some code that might be useful:
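The idea, roughly sketched in Python (the actual code was Unity script against the Terrain API; `place_trees` and its parameters below are my own illustrative naming, not Unity's):

```python
# Walk the heightmap on a coarse grid and drop a tree wherever the
# sampled height falls inside the "tree line" band.
def place_trees(heights, min_h, max_h, step=4):
    """heights is a 2d list of normalized height samples; returns
    normalized (x, z) positions for each tree to place."""
    rows, cols = len(heights), len(heights[0])
    placements = []
    for y in range(0, rows, step):
        for x in range(0, cols, step):
            if min_h <= heights[y][x] <= max_h:
                # normalize to 0..1, which is how Unity expects tree positions
                placements.append((x / float(cols), y / float(rows)))
    return placements
```

In Unity terms, each placement would become a TreeInstance at that normalized position; the fixed `step` is exactly what gives you the military-haircut regularity that noise would break up.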



        Obviously you'd want to add some sort of noise to the tree placement and maybe even add some simple tree variants so it doesn't look like a military haircut, but this code produces a good starting point, something like this:


treecut

        You could probably get even more headway by sampling other maps or attributes to add a bit more spottiness to the placement or maybe even do a radial tap per sample step, something like that. Dunno, if you make any useful changes to this, let me know! Otherwise I'm just going to do it myself and...well, i guess that's not that bad.

Friday, April 6, 2012

thoughts with sounds

        Nothing really earthshaking here except whatever you might come up with on your own;)  Haven't blogged in way too long and left some things hanging, which I promise I'll get back to in the near future, just had a real interesting time of late with some slight work randomization in the form of the build machine, but it's been a great project!  I'm actually really intrigued to set up a python tools pipe that uses some form of CI, minus the build step...or maybe plus the build step for custom extensions.  Dunno tho, this is the sort of thing I think i need to do in moderation, hehe, not to slag on past lives or anything, but dipping my toes back into pure pipeline work reminds me that I like not having to do pure pipeline work all the time!  Given some of this new info, I think I'll probably push back part 2 of the Unity/Git tutorial and expand it to being a complete Unity/Git/CI tutorial once I get Unity builds figured out.  Shouldn't be too hard...

        Taking a page from my co-worker Chris, I've started gathering little snippets of media on youtube and vimeo that I can draw inspiration from, I'm realizing I need to shift my focus a bit to really stay up in this game.  It's such a different thing we're trying to do here, ideas need to come from different places.


kuato
Taking a bit of advice from this guy...

        One of the more fun videos came by way of another co-worker, which is driving some other stuff I'm thinking about right now, too.  Looking at this sort of thing does really solidify how much I enjoy having tactile components beyond just a touchscreen.  Something to keep thinking about.  Check it out, fun times:



        Hmm...lastly I started playing around with another prototyping framework, trying to find a good sandbox for doing sound driven gfx or gfx driven sound or whatever other combination of those things I can come up with.  I gotta admit I was a little charmed to see it come up in a tk window, of course my first thought was "aww, i miss IDLE".  But yeah, check out Pure Data, it's a bit sparsely documented, but it's fun.  If you're familiar with patch-"programming", you'll feel right at home here (without having to pay for Max/MSP!).


pd640

    Ok just kidding.  Lastly for real, here's some fun generative audio stuff to get you through any boring times you might encounter this weekend.  I wonder what an evolution on this idea of generative music might look like...



Otomata - Try this online!


Tonematrix - This one too!

Friday, March 23, 2012

NOW I remember what being happy feels like

        It's 10:30pm on a Friday and I'm sitting in my apartment mere blocks away from the downtown San Jose nightlife working.  And not working on anything really "cool" or "sexy", at least not to most people, in fact it's about as dry as you can get.  I'm working on a comparative analysis doc for different software components of the continuous integration system we're putting together at work, and I'm having a blast.  Realizing this made me stop and think, not about the specifics of the task itself, but the implications of the situation.

        My friend and fellow tech artist Rob Galanakis made a blog post a while back regarding a change in jobs/life he had gone through recently, and I guess it's my turn now.  It's so funny, you know, I'm the guy that's always saying "Ah, it's just a job, work's just a passing phase, etc", but you know, looking back I spent so much time propping myself up that I don't think I realized I wasn't following my own admonitions.  Even when I was in L.A., I feel like, in retrospect, that I was the guy who was married to someone he wasn't really that into, but publicly proclaimed how incredibly hot his wife was, in a really feeble attempt to make himself feel better about it.  Clumsy analogy, but you get what I'm saying, that is, i feel like i've been deluding myself (poorly, I might add) in an attempt to avoid the bigger issue.

rdirty6sibs5
"That's delude yourself, dummy..."

        Maybe it's too early to tell, i've only been with Perceptual for a month, still in that honeymoon phase, but you know, I've never felt this sold after just a month.  That could just be because I'm so happy to be doing something different, but I think it's more than that.  I've told everyone this and it's true, this is seriously the hardest job I've ever had.  I'm having to pretty much draw on every bit of software development experience i've gathered in the last 12 or so years just to keep up, but you know, I wouldn't have it any other way.  Not being challenged at work is worse than death, and I mean GENUINE challenges, not the challenges of having to manage your managers.  I have ONE manager now who I report to, and by report to, I mean i tell him what i'm working on for the month and if it aligns with the group's charter, he gives me his blessing and doesn't bother me again for a month.  And when people ask my opinion and tell me to go do something, they mean, "we trust you to leverage your experience, so we're just going to get out of the way and let you execute".  Having been paid that lip service for the last 2+ years, this is a refreshing change.

        Ultimately, I know we're sitting in the pressure cooker.  In a year, we may not be around, but you know, I'm going to try my damndest to make sure that doesn't happen.  And you know, i feel like at this point, I actually can do that.  I feel like my contribution matters, I feel like my contributions are actually valued, I feel like I really can make a difference.  I haven't felt that at a job in...about 4 years.

        And you know, the most telling part overall is that on a high-stakes, high-pressure team inside of a large company like Intel, I've found that working environment I've been looking for.  The one the games industry claims to espouse, but really doesn't have a clue about how to create...that's interesting.

       Ah well...not gloating, not whining, just thinking.  It's good to be happy at work again.

smiley-face-wallpaper-016
Not quite like this, but close enough considering...

Tuesday, March 20, 2012

prototyper's toolbox

        Nothing special here, just a bunch of tools I've been using recently to do rapid prototyping.  It's such an alien concept to me to just throw a bunch of hardware and software into a blender and see what comes out, but it's pretty freakin fun.

banner
Unity
Probably needs no explanation, really good platform for prototyping as it's open enough to do things that aren't necessarily games.

oF_0
openFrameworks
Everything you need to create rich interactive applications under one roof. This is the model for what easy-to-use SDKs should be.

processing_cover
Processing
It's like an IDE for doing cool graphical stuffs. For you Pythonistas, check out pyprocessing as well.

vvvv
vvvv
Just found out about this the other day, node based madness.

ArenaLogo
OpenNI
Definitive framework for natural interaction (gestures, etc). A bit of a learning curve, but good to know.

        So what are some of your favorite rapid prototyping tools?  And if you say UDK, i swear i WILL hunt you down and punch you in the heart...

Saturday, March 17, 2012

git some!

        So i spent a decent chunk of the day mucking around with Unity projects and version control...well ok truth be told, it's actually been a few days, first with SVN and now with git.  Now, obviously you're not going to get all the DVCS benefits with content, but being able to branch scripts could be pretty cool.  We're not doing the sort of focused development that's really going to require that I don't think, but who knows?  I have some folks that I want to get up to speed a bit on Unity Scripting, so having safe sandboxes for them to script in is definitely a plus.

hydra
For every branch you merge, two more...

        Github's been unreachable for a few days and I needed to get up and functional pretty quick, so I went with bitbucket.  God knows I pull enough code from bitbucket, might as well join the revolution.  Also free private repos, which is good because I may or may not be posting somewhat sensitive content.  So, there are a few things we need to do first, I'm going to assume you have Unity installed, if not, think about doing that at some point ;)  Other than Unity, you'll want to get git on your machine too.  For Windows users, you'll want to grab two software packages:
  • Git for Windows (msysgit) - You'll want to grab the most recent Git-1.7.x-Preview file, at the time of this writing it's 1.7.9
  • TortoiseGit (optional) - If you've ever used Tortoise, you'll know what this is, otherwise, it's a very easy to use git windows shell extension.  Most of your day-to-day will happen here, unless you just love command lines...and there's nothing wrong with that.  I'll be covering how to do everything through TortoiseGit where possible here, but again, you can also do it all with the command line/GitBash.


        Make sure you install msysgit first, as TortoiseGit requires it.  I would uncheck the Windows Explorer integration option, but otherwise the install is pretty straightforward.  TortoiseGit is equally straightforward; make sure you pick OpenSSH over the PuTTY option, as we'll want OpenSSH for bitbucket.  Speaking of which, grab yourself a bitbucket account too.  The free 5-user account is more than sufficient for what we're doing, unless you have a bigger team that's going to need access to the repo.  In that case, I might still keep the number of users on the bitbucket repo fairly small and set something up to push to a different repo for the larger group (hmmm... another blog post?).  Be sure to refer to the bitbucket docs on setting up git as well.  You'll also need to make sure you're SSH friendly, which is a bit of a process.  Thankfully, that's also documented on bitbucket, and they do a much better job of explaining it than I'm going to.  You can skip step 5, although it is a cool little trick.  So the whole process looks like this:
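The SSH setup the bitbucket docs walk you through boils down to generating a keypair and pasting the public half into your account settings.  A minimal sketch (the key path here is illustrative; the docs use the default ~/.ssh/id_rsa):

```shell
# Generate an RSA keypair non-interactively (-N "" means no passphrase;
# you probably want a real passphrase for an actual key).
ssh-keygen -t rsa -b 2048 -N "" -f ./demo_key

# The .pub file is what you paste into bitbucket's SSH key settings.
cat ./demo_key.pub

# Once the public key is added to your account, this verifies the
# handshake (it prompts to trust bitbucket.org's host key the first time):
# ssh -T git@bitbucket.org
```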
  • Create a bitbucket account
  • Install Git for Windows - bitbucket Documentation
  • Install TortoiseGit (optional)
  • Setup SSH - bitbucket Documentation
  • Create a git repo on bitbucket
  • Create a folder for your Unity project
  • Create a new project in said folder (don't import any Unity assets)
  • Enable external version control in Unity
  • Delete the Library folder in your project
  • Create a git repo in your Unity project folder
  • Commit the changes
  • Add your bitbucket URL to your local repo settings
  • Push the local repo to bitbucket
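For the command-line crowd, the local half of those steps can be sketched like this.  The names and the remote URL are placeholders, and the git config lines just give the commit an author (normally you'd set those globally once):

```shell
# Folder for the Unity project; create the project here in Unity,
# enable external version control (in the Editor settings), then
# delete the generated Library folder before the first commit.
mkdir MyUnityProject && cd MyUnityProject
rm -rf Library

# Keep Unity's regenerated folders out of the repo:
printf 'Library/\nTemp/\n' > .gitignore

git init
git config user.name "Your Name"          # placeholder
git config user.email "you@example.com"   # placeholder
git add .
git commit -m "Initial commit of Unity project"

# Point the local repo at bitbucket and push (fill in your own URL):
# git remote add origin git@bitbucket.org:youruser/yourrepo.git
# git push -u origin master
```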
        If you're familiar with TortoiseGit, this is a pretty straightforward process, but I'm guessing that's not the case for a lot of readers, so in the next post I'll step through the rest of the process (with pictures).  There are a lot of steps, but none of them are long or involved, so the whole thing actually goes pretty quickly.  I just got lazy because I've spent the last two days taking screen caps, and I'm realizing I need to rethink how I do images for this tutorial, so stay tuned...
