Ayer_animation

Concept…
Initially, Sophia and I wanted to create something abstract, and we thought of developing our story based on poetry. The concept is based on the Chilean novel “Ayer” by Juan Emar. According to Sophia, the whole book describes the day the main character and his wife spent together on the day before the one being narrated (meaning, yesterday, or in Spanish, “ayer”). We selected one chapter of the book in which the couple visits a painter friend, and they go on and on about colors: how we perceive them, what their effect on us is, and how you could basically explain life itself just by analyzing colors.

Result…
Well…the result is not perfect yet, since we tried so hard to find footage that we could use to imitate the brush strokes…in the end, we found PNGs that seem to best represent our concept…but we will continue to work on it.

Please see the current result below.

Click to view our final cut…

ICM Final

Concept 1: VJ experience
The very earliest concept was to create a personal, self-contained environment in a 3D space. This immersive environment has video fed in behind it, which creates a VJ experience…and connects with VR devices…

 

Concept 2: Identities & Surveillance
Gazing at others seems like normal behavior; however, most people feel uncomfortable when they are stared at for too long. What if you are being stared at without noticing? That makes people feel insecure. You cannot tell where people’s gaze is directed when they are wearing sunglasses, so you do not know if you are being watched. We thought: what if we had multiple players connected through different webcams and fed all of the webcams randomly to all the spheres?

While brainstorming….

Getting WebRTC to work successfully on both ends…

 

Concept 3: Uncertainty/watching others, being watched, and getting lost in what is real
…So, in order to simulate this situation, we came up with a chat room where users think they are chatting with others live, but those others are actually pre-rendered videos. Staging a network issue helps the environment feel more real and present.

 

After user testing…we got a lot of feedback…
e.g.
*pick a better background that does not give away the location and time
*scripting the actor and actress might make it feel more real
*should invite the user into the conversations

 

Final Version: We took some of the advice. In order to feel more “real,” we should have a start button before the user enters the chatroom, so we created a disclaimer page for users before they begin. One reason was that during user testing, some people took it seriously when they learned the whole experience was fake, so we wanted some sort of notice before the user begins; it also lets them explicitly start the experiment. The chat then begins, and after 40 seconds, the user’s recording shifts to the left screen and replaces its previous video. An alert then asks users whether they want to download their recording and trick the next user. If they click yes, the file is downloaded to their local folder; if not, the experiment starts over and returns to the disclaimer page. So, each round, the recording is saved and fed into the next screen from left to right. Eventually, new users are tricked, and confused, by the previous users.

 

Code Setup
This repo sets up and runs a fake video chat site that loops webcam-feeds of users. The aim is to explore the nature of virtual video communications: How do we establish communications online with strangers that we see and hear – but do not know whether they are real or not? How is trust established in a virtual group experience? How do we perceive our role in a real (or fake) group? How do we feel about unreal experiences? What makes us willing to participate in fake schemes?

Prerequisites
run python3 -m http.server in the repo folder
edit move_files.py, replacing the folder names with your corresponding local folder structure
run move_files.py in the background to move the recorded video bits around (use sudo if needed)

Code
The start page is index.html, which asks users to agree to the terms of use. The button at the bottom links to vid_chat2.html, which sets up the virtual chatroom with main_vid_nu.js. In main_vid_nu.js we use webRTC to record a stream from the user’s webcam. This is positioned in the middle of the screen, while the two side screens play back two pre-recorded videos (maxresize.mp4 and resizeBav.mp4). Those videos display the (recorded) webcam streams of two actors, who seem to be trying to establish a communication with each other. These videos are only used in the first two rounds; after that, two streams recorded from the webcam are fed into the left and right screens. This “replacement” is repeated every round, so the video chat is looped with user videos.

Before this takes place, the user sees a recording of their own webcam from the last 40 seconds popping up on the left screen. After a few seconds, a popup alert asks users whether they want to fool other people with their own recording. If the user agrees, the video gets downloaded as a video file with a specific name (repl_left or repl_right), and the website defaults back to the home screen (index.html).

The move_files.py script continuously waits for video files to arrive in the downloads folder. When a new file comes in, it moves the file to the main folder and stores the name of the last downloaded file in move_vids.txt (located in the main folder). This enables a form of tracking of which side of the screen (right or left) should be updated with the latest video: the right and left sides of the video chat are always fed the last recording from the last user, in an alternating pattern. Python is necessary to circumvent the security restrictions in JS, since a web page cannot move files around on the local file system by itself.
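A minimal sketch of the record-and-download step, assuming the standard browser MediaRecorder API; the function and variable names, the confirm-dialog text, and the .webm container are illustrative and may differ from what main_vid_nu.js actually does:

// Sketch only: record ~40 seconds of the user's webcam, then offer the clip
// for download under a side-specific name (repl_left / repl_right, as described above).
let recordedChunks = [];

async function recordRound(sideName) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const recorder = new MediaRecorder(stream);

  recorder.ondataavailable = (e) => {
    if (e.data.size > 0) recordedChunks.push(e.data);
  };

  recorder.onstop = () => {
    // Ask the user whether they want to fool the next visitor with this clip.
    if (window.confirm('Use your recording to fool the next user?')) {
      const blob = new Blob(recordedChunks, { type: 'video/webm' });
      const link = document.createElement('a');
      link.href = URL.createObjectURL(blob);
      link.download = sideName + '.webm';   // e.g. "repl_left.webm"
      link.click();
    }
    // Either way, fall back to the disclaimer page.
    window.location.href = 'index.html';
  };

  recorder.start();
  setTimeout(() => recorder.stop(), 40000);   // stop after 40 seconds
}

From there, move_files.py picks the downloaded file up from the downloads folder and rotates it into the chat, as described above.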

This is probably a very complicated way to run such a chat app locally. It is used here to quickly prototype the idea of a fake video-chat that loops in itself.

Source-code
recording from webRTC – code based on https://github.com/webrtc/samples/tree/gh-pages/src/content/getusermedia/record
video screen – code based on https://github.com/mrdoob/three.js/blob/dev/examples/canvas_materials_video.html

Progress…

Material…
Helmet: plastic dome (too expensive), or fabric with wires
Wood board: to mount the motor
Motor: DC or stepper motor? and with a bike chain??
Projector: LG Mini PF1500
Angle sensors: rotary sensor??
Connection between motor and angle sensor: using Bluetooth/WiFi
Things to hang it with: fishing wire?? or something stronger


Concerns…
1. what the minimum distance from the projector to the surface should be
2. what makes a projectable surface

Project Structure…

Video Sketch…
Reference

Final Proposal_Immersive Livestream Concert with VR Gear

Thoughts…
Last week, I made a 3-dimensional cube constructed from multiple rotating boxes. Each side of the boxes is textured with the camera capture, so the person sees himself/herself inside a 3D world in 360 degrees. The user is also able to drag the mouse to look around the space in 360 degrees. In the back of the cube, there is a video playing, which allows users to have a thrilling VJ/sound experience in a 3D world.

Previous Project…

Development…
Improve the surface and the environment. Also, once a user is inside the VR world, they should be able to interact with the content.

Supporting sources…

I found a library called A-Frame that lets JavaScript sketches run as VR experiences in the browser, and I found an example that is very similar to my idea, click

Reference

Stop Motion Animation

Concept…
Our initial thought was to use simple materials, such as strings, rice, or beans, to create our stop motion animation; an aesthetic feeling is what we wanted to give our audience. But it turned out to be something we didn’t expect: we ended up using pcomp materials to create our storyline. After deciding what material we were going to use, we talked generally about what kind of story it could tell. We basically went free-form without any fixed storyboard, and literally came up with the story while shooting the animation.

Challenges…
The challenge was setting up the equipment and getting familiar with the software. We spent about an hour just setting up and getting ready to shoot. The shooting went pretty smoothly, and our ideas came up right away as we shot. Even though the shooting was fun, I didn’t realize it would be that time-consuming: I ended up with about 300-ish pictures, and that was only for a 30-second stop motion animation…

During the shooting…

 

Rough Cut


Final Cut with Audio

Final Project Proposal

Concept…
One of my previous projects explored how the brain perceives and obtains information based on our ingrained belief systems, and how our senses can sometimes lead us away from what’s true. This project requires an enclosed room to create an intimate experience. I know this would be the biggest challenge given our limited space on the floor, so I decided to go for a different approach.
View here
Going for a different approach…
I’m thinking of making a giant head cap that hangs on the wall; users have to wear it in order to experience the piece. Inside the giant head cap, I’m thinking of having a little lantern (with shapes carved into its surface) above the user’s head. When the user wears the cap, the light turns on and rotates (driven by a servo), projecting the patterns onto the inside surface of the head cap. On that surface, I would like to install face detection sensors, so that as the user looks around inside the head cap, the servo follows the user’s gaze.
The pattern will transform on a timer, depending on how long the user has been wearing the head cap. After the experience, I would like to ask users to write down what they saw inside the head cap on the paper I provide and stick it to the outside of the head cap (which I will continue to collect).
Challenges that might come up…
I can already think of some problems and challenges for this project. For example, the position of the lantern might not project the shadows right in front of the user’s vision, so the angle might need to be reconsidered. Also, every user is a different height, so I might have to adjust the height of the head cap to fit each user.

ICM Week 8: Immersive VJ Experience

Thoughts & Ideas:
I want to create an immersive experience where users have their own world in which to enjoy music and visuals. It mimics the concept of a VR world, allowing the user to virtually participate in the event inside the screen by using the camera to bring them into the event live.

However, I encountered a problem installing the video/GIF as the background. I first had to move the center of the camera outside the cube; then I could calculate the distance away from the cube at which it would be visible. If you think in 3D, moving further away from the cube is actually moving gradually in the negative z direction. After understanding this, I translated the position of the video to negative z so it became the background.

Problems:
While adjusting the position of the video, I realized that “translate” is actually cumulative: within a single draw() call, each translate() composes with the previous ones (the transformation matrix only resets at the start of the next frame), so wrapping transforms in push()/pop() keeps them from leaking into later shapes. Also, I have two different shapes in this sketch, a box and a torus. Right now, both are reflecting the live camera capture; however, that’s not what I want, since in my code only the boxes are given the camera texture, not the toruses. This might share the same cause as my earlier challenge: I tried giving the toruses a different texture, but somehow it did not make any change.
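A tiny, self-contained p5.js example (not my final sketch; the shapes and offsets are just placeholders) showing how push()/pop() keeps both the translate() calls and the texture/fill state from leaking between shapes:

let cam;

function setup() {
  createCanvas(600, 400, WEBGL);
  cam = createCapture(VIDEO);   // live webcam, as in the real sketch
  cam.hide();
}

function draw() {
  background(0);

  push();                  // box: gets the camera texture
  translate(-100, 0, 0);   // only affects this box
  texture(cam);
  box(50);
  pop();

  push();                  // torus: plain fill, no camera texture
  translate(100, 0, 0);    // measured from the original origin again
  fill(150, 50, 50);
  torus(30, 10);
  pop();
}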

Demonstration Video:
Immersive World Version1

Final Version:

P5 version: Click

Code:
var video;   // live webcam capture, used as a texture on the boxes
var bg;      // background video

function setup() {
  createCanvas(800, 600, WEBGL);
  background(0);
  video = createCapture(VIDEO);
  video.size(200, 200);
  bg = createVideo("beats.mp4");
  bg.loop();
  bg.hide();
}

function draw() {
  // video(0,0);
  background(0);

  normalMaterial();

  ambientLight(0, 0, 10, 3, 3, .8);
  pointLight(300, 300, 300, 300, 300, 1);
  // image(bg,0,0,width,height);

  // Move the scene away from the camera, then push the background video
  // plane even further back so it sits behind everything else.
  translate(0, 0, -600);
  push();
  translate(0, 0, -2700);
  texture(bg);
  box(3300, 2800);
  pop();

  var radius = width * 1.5;

  orbitControl();

  // translate(0, 0, -600);
  // Place toruses and camera-textured boxes on a sphere around the center.
  for (let i = 0; i <= 12; i++) {
    for (var e = 0; e <= 12; e++) {
      push();
      var a = i / 12 * PI;
      var b = e / 12 * PI;
      translate(sin(2 * a) * radius * sin(b), cos(b) * radius / 2, cos(2 * a) * radius * sin(b));
      if (e % 2 === 0) {
        push();
        // fill(80,20,64,64);
        torus(40, 40);
        pop();
      } else {
        fill(150, 50, 50, 30);
        texture(video);   // only the boxes get the camera texture
        rotateZ(frameCount * 0.01);
        rotateX(frameCount * 0.01);
        rotateY(frameCount * 0.01);
        box(85, 85, 85);
      }
      pop();
    }
  }
}

Final_Thoughts

Final cut:
Tai Chi Master

Thoughts:
The most challenging thing during our entire production process was dealing with the quality of our video sources. After our first draft, we tried very hard, even going to Chinatown to shoot more footage. However, that footage only added variety, not content, though the cut is still better for it! Later we went into Audition and edited the audio heavily, adjusting the equalization and the volume of the background music. The crackling sound of the mic disappeared; however, the background music still competes with the vocals, which disappointed me.

Photos during our shoot:

Midterm: Smart Candy Vending Machine

Finally working with the demo audio files…3am on the floor…

Another video to show (and to make sure I have proof in case it’s not working later)…

Both ultrasonic sensors received the same values…(it turned out to be a problem with the ordering of the code)

One sensor was not working…(I spent more than four hours debugging why I wasn’t receiving values from both sensors…)

Prototype of my Smart Candy Vending Machine

 

Arduino Code:

/*
 * Ultrasonic Sensor HC-SR04 and Arduino Tutorial
 *
 * Created by Dejan Nedelkovski,
 * www.HowToMechatronics.com
 *
 */
// defines pin numbers
const int trigPin = 13;
const int echoPin = 12;
const int trigPin2 = 4;
const int echoPin2 = 7;

// defines variables
long duration;
long duration2;
int distance;
int distance2;

void setup() {
  pinMode(trigPin, OUTPUT);   // Sets the trigPin as an Output
  pinMode(echoPin, INPUT);    // Sets the echoPin as an Input
  pinMode(trigPin2, OUTPUT);
  pinMode(echoPin2, INPUT);
  Serial.begin(9600);         // Starts the serial communication
}

void loop() {
  // First sensor: clear the trigPin, then hold it HIGH for 10 microseconds
  digitalWrite(trigPin, LOW);
  delayMicroseconds(10);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  duration = pulseIn(echoPin, HIGH);    // time until the echo returns

  // Second sensor: same trigger sequence on trigPin2
  digitalWrite(trigPin2, LOW);
  delayMicroseconds(10);
  digitalWrite(trigPin2, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin2, LOW);
  duration2 = pulseIn(echoPin2, HIGH);

  // Calculating the distance: sound travels ~0.034 cm per microsecond,
  // and the pulse covers the distance twice (out and back)
  distance = duration * 0.034 / 2;
  distance2 = duration2 * 0.034 / 2;

  // Prints the distances on the Serial Monitor as "d1, d2"
  //Serial.print("Distance1: ");
  Serial.print(distance);
  Serial.print(", ");
  //Serial.print("Distance2: ");
  Serial.print(distance2);
  Serial.println();
}

 

 

Serial input from Arduino to P5 code
Controlling the height of the wave and triggering the audio files according to the distances…
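A rough sketch of the p5 side, assuming the p5.serialport library (with the p5.serialcontrol app running) and p5.sound; the port name, the wave.mp3 file, and the 15 cm trigger threshold are placeholders rather than my exact values:

// Reads "distance1, distance2" lines from the Arduino, maps one distance
// to a wave height, and plays a sound when the other sensor sees something close.
let serial;
let dist1 = 0;
let dist2 = 0;
let chime;

function preload() {
  chime = loadSound('wave.mp3');          // hypothetical audio file
}

function setup() {
  createCanvas(800, 400);
  serial = new p5.SerialPort();
  serial.open('/dev/cu.usbmodem14101');   // replace with your Arduino's port
  serial.on('data', serialEvent);
}

function serialEvent() {
  let line = serial.readLine();           // one "d1, d2" line per Arduino loop()
  if (line.length > 0) {
    let parts = split(trim(line), ',');
    if (parts.length === 2) {
      dist1 = Number(parts[0]);
      dist2 = Number(parts[1]);
    }
  }
}

function draw() {
  background(0);
  // Map the first distance to the height of a simple "wave" bar
  let h = map(dist1, 0, 100, height, 0, true);
  fill(80, 200, 255);
  rect(0, h, width, height - h);

  // Trigger the audio when the second sensor sees something within ~15 cm
  if (dist2 > 0 && dist2 < 15 && !chime.isPlaying()) {
    chime.play();
  }
}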

Overall…I spent hours and hours just troubleshooting not receiving values from the sensors and being unable to detect the port for my Arduino UNO…which is the simplest kind of problem, but you never know how long it can take…