Hacking the Browser final proposal by Sebastian Morales

For my final project I have two ideas. I should probably stick to one but what the heck. 

Idea no. uno!

Have you ever thought about how the future affects the present? Hmmmmm. What??? What if events happening in the future could ripple their consequences into today... I am afraid that if I keep going deeper into this subject I might start making terrible YouTube videos about how the government is traveling through time to stop free thinking and free energy, elect Trump, and hide extraterrestrial evidence from the rest of America. 

Anyways. How would I make this into a Chrome extension? In a week... "a week", I guess time has a different meaning if you can move freely through it. Ahhhh! Back to Chrome extensions.

Spoiler alert: [I am not really sending messages from the future.]

What if searches from the future could show their results today? What if one day you open a new tab and see this:

Weeks later, long after you have completely forgotten about that Sri Lankan politician, you open a new tab only to be redirected to a Wikipedia page:

What do I need to achieve this?

  • First, learn how to use the Wikipedia API to find articles and extract their descriptions
  • Store these search terms/pages in a server or a database for the future redirection 
  • A Chrome extension with a basic new tab redirect
  • Some JS in the extension to decide when to redirect you to a new hint from the future, or a full search 
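To get a feel for the Wikipedia step, the REST summary endpoint returns a title, a short description and an extract for a page. A rough sketch of the lookup — the `extractHint` helper and exactly which fields I keep are my own choices, not settled code:

```javascript
// Fetch a page summary from the Wikipedia REST API and reduce it to
// the bits we would store as a "hint from the future".
const WIKI_SUMMARY = "https://en.wikipedia.org/api/rest_v1/page/summary/";

// Pure helper: pull the pieces we care about out of a summary response.
function extractHint(summary) {
  return {
    title: summary.title,
    description: summary.description || "",
    // The first sentence of the extract is usually enough for a teaser.
    teaser: (summary.extract || "").split(". ")[0],
    url: summary.content_urls ? summary.content_urls.desktop.page : "",
  };
}

// Network half, usable from the extension's background script.
async function lookUp(title) {
  const res = await fetch(WIKI_SUMMARY + encodeURIComponent(title));
  return extractHint(await res.json());
}
```

The stored hints could then sit in `chrome.storage` (or a small server) until the extension decides it is time to show one.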

Idea número dos

This would combine my class of Rest Of You with Crazy DanO as well as some pcomp madness.

The idea consists of utilizing galvanic skin response sensors to constantly monitor your internet experience. If the sensor detects a rise in arousal over a defined threshold, it would send a signal to the Chrome extension to record the arousal value as well as a screen recording of the active tab.  

Constant Galvanic Skin Response Measurement 

If the reading goes past a threshold save screen capture to server. Also record reading value. 

  • The most complicated part of this project, based on my expertise, might be getting the Arduino to constantly send values to the Chrome extension. 
  • I would also need access to the tabs and all websites
  • A way to save images to a server or database. 
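The threshold logic itself is the easy part; a sketch of the decision step on the extension side (the function names and the smoothing window are mine, not from any existing code):

```javascript
// Decide when a stream of GSR readings should trigger a capture.
// A short moving average smooths sensor noise; the trigger fires only
// when the smoothed value crosses the threshold from below, so one
// sustained spike produces one capture, not dozens.
function makeTrigger(threshold, windowSize = 5) {
  const window = [];
  let wasAbove = false;
  return function onReading(value) {
    window.push(value);
    if (window.length > windowSize) window.shift();
    const avg = window.reduce((a, b) => a + b, 0) / window.length;
    const isAbove = avg > threshold;
    const fired = isAbove && !wasAbove; // only on the upward crossing
    wasAbove = isAbove;
    return fired ? { capture: true, arousal: avg } : { capture: false };
  };
}
```

When `capture` comes back true, the extension would grab the active tab (something like `chrome.tabs.captureVisibleTab`) and POST the image plus the arousal value to the server.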

 

Just Not Sorry by Sebastian Morales

This is a quick analysis of the Just Not Sorry Gmail plugin Chrome extension. 

What does the extension do?

It matches words or sentences as you compose emails against a list of predefined insecure words. If there is a match, the words are underlined so the user becomes aware of their language use.
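The matching itself can be done with a word-boundary regex over the draft text. A minimal sketch of the idea — the phrase list here is abbreviated and the function is my guess at the approach, not the extension's actual code:

```javascript
// Flag "insecure" phrases in a draft, the way a spell-checker flags typos.
const INSECURE = ["just", "sorry", "i think", "actually"];

function findInsecure(text) {
  const hits = [];
  for (const phrase of INSECURE) {
    // \b keeps "just" from matching inside "adjust".
    const re = new RegExp("\\b" + phrase + "\\b", "gi");
    let m;
    while ((m = re.exec(text)) !== null) {
      hits.push({ phrase, index: m.index });
    }
  }
  return hits;
}
```

Each hit's index would then be used to wrap the matched span in the compose window with the underline styling.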

Questions for our Guest 

One of the developers of the extension will visit our class today; here are a couple of questions I have for him:

  • What is the "_metadata" folder? It wouldn't let me load the extension. 
  • Google Analytics? Trying to track how many users? What kind of info do you get from this? 
  • Storage? 
  • What is going on in the script loader? Why have it instead of naming the scripts in the manifest?
  • What is the update_url in the manifest?

Example of JustNotSorry in action.

Getting 3D models from Google Earth by Sebastian Morales

Update: There have been some questions about Apple Maps vs Google Maps and which will return better results. I haven't tested Apple Maps yet, but judging by the picture quality I am guessing it will give better results (at least for the texture).

Google left vs Apple right


Original Post

Thinking about it, this method can be applied to a lot more than Google Earth models... In this particular case I just wanted to get the corner of a particular building in New York City. 

In the past I remember people using programs like 3D Ripper that would try to capture the geometry directly from OpenGL. I actually tried it once, but without any luck. The other problem with that approach is that you need a Windows machine. 

In this method we will use a photogrammetry approach. 

Start by scouting your building.

Identify what you want to capture and what is irrelevant. The clearer the idea here, the better the chances of success. 

Start a QuickTime screen recording 

Try moving at a regular speed around the object of interest. I would say you have about 1.5 minutes to capture all the geometry you want before you run into problems afterwards. Maybe you can push it up to 2 or 2.5 minutes; I really haven't pushed the method to its limits.

Move around and make sure to get all the different angles you may need. 

A good tip here is to only capture the section of the window with no words, logos or icons; this will save you time later and increase your chances of success. 

This is also one of the reasons why I like using Google Earth better than Google Maps: you can turn all the icons off. 

Here is the actual video recording I used if you want to get an idea.

 

Isolating Frames

The free version of Autodesk Remake (formerly Memento) will only allow you to upload up to 250 frames. Now, our screen recording is about 1.5 minutes long at 60 fps, which means we have about 5400 frames. Truth be told, most of those frames are nearly identical, since we were moving slowly compared to the screen recording. 

There are probably a couple of ways to do this, but the one I am most familiar with uses Photoshop. First import the video frames as layers, limiting to every 20 frames or so (5400/20 = 270), let it run, and then export the layers as files. This last step might take some time, but that is it.  
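The every-Nth-frame arithmetic generalizes; here is a tiny helper (mine, just to make the math explicit) that picks the stride needed to stay under an upload limit:

```javascript
// Given a recording length, its frame rate, and the maximum number of
// frames the photogrammetry service accepts, pick the import stride.
function frameStride(seconds, fps, maxFrames) {
  const total = seconds * fps;
  // Round the stride up so the resulting frame count never exceeds the limit.
  const stride = Math.ceil(total / maxFrames);
  return { total, stride, frames: Math.floor(total / stride) };
}
```

For the 1.5-minute recording above this gives a stride of 22 (about 245 frames), which stays just under Remake's 250-frame cap, whereas every 20 frames lands slightly over it.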

Remake

This is one of the easiest steps to follow, open Remake, select Create 3D from Photos. Select the images and Create Model. The defaults work fine.

You are almost done, but this step actually takes a long time, hours. Go out on a date, have a nice dinner, and get back to work. 

Hopefully, if everything went right, the moment you open Remake you should be able to open your new 3D model. 

I hope this was helpful! 

Old Memories I didn't know I had by Sebastian Morales

Going back into memories while revisiting old data I didn't know I was sharing. 

You can download your own data here


There are all kinds of data you can download, from your search history to your entire email, from your pictures to every single move you have made (location). Select the data you want and simply download it. This step might take several hours or even days (for me it took a little under 1 day to process).

Different types of data come in different formats; location, for example, comes as JSON.

Interestingly enough, the last entry for my location was back in 1398884229139, that is, April 30, 2014 for those of you who don't keep track of time as milliseconds after 1970. 
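Converting that epoch value is a one-liner:

```javascript
// Google exports location timestamps as milliseconds since the Unix epoch.
const lastEntry = new Date(1398884229139);
console.log(lastEntry.toISOString()); // "2014-04-30T18:57:09.139Z"
```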

What happened then? Why did it stop logging/tracking? 

Let's take a look at the map.

It looks like I left home around noon and walked very slowly to what seems to be my girlfriend's (at the time) dorm... Then radio silence.

I decided to check my email to see if there was some evidence of what could have happened.

Well there you have it. Got an iPhone and the tracking stopped logging. 

Before I jump into other things, however, I wanted to share some days.

Churros!

Like the day we traveled almost two hours just to get some good churros! Then somehow ended up going twice to the same restaurant up on the north side. 

...or the day that I was trapped in the US (waiting for my OPT) but Christmas was still happening and all my Muslim and Hindu friends showed up, went to target, bought some frozen Pizzas, had a "family dinner" and ended up at one of the best Blues clubs in the city. 


Thinking about other interesting ideas that could be hidden inside the massive amounts of data, I decided to take a closer look at my email. In this case I was using Immersion, a tool developed by the MIT Media Lab to portray your email networks.


It looks at whom you are sending and receiving emails from. Specifically, it looks at the From, To, Cc and Timestamp fields of every email. If a particular email was sent to multiple people, then connections start to form among those people; the more emails, the stronger the connections and the bigger their bubble. Take a look at the image above for my last year or so. If you feel like your bubble should be bigger, send me an email (or a hundred). 
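That weighting scheme is easy to sketch: every pair of people on the same email gets an edge, and each shared email strengthens it. This is my reconstruction of the idea from the description above, not Immersion's actual code:

```javascript
// Build a weighted contact graph from email headers.
// Each email lists everyone on it (From, To, Cc); every pair of contacts
// on the same email strengthens the edge between them by one.
function buildGraph(emails) {
  const edges = new Map(); // "a|b" -> weight
  for (const { people } of emails) {
    const sorted = [...people].sort(); // canonical order for the pair key
    for (let i = 0; i < sorted.length; i++) {
      for (let j = i + 1; j < sorted.length; j++) {
        const key = sorted[i] + "|" + sorted[j];
        edges.set(key, (edges.get(key) || 0) + 1);
      }
    }
  }
  return edges;
}
```

Bubble size would then come from summing each person's edge weights before drawing the network.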

For privacy reasons, the Immersion project won't look into the content of the emails, but DanO will. I decided to take a look at the email word count code he made available:

After logging in, it will start analyzing in batches. I am not exactly sure how this works, but here are some results. Clicking next again will add another batch of words to the ones already listed. The list keeps going for what feels like forever, but here are the first 50 words. 

Over 300,000 words are listed in what looks to be almost 2,400 emails. I am not sure about this, but judging by the appearance of "www" and, more importantly, "mailto", I am assuming this represents one appearance per email.

What does this all mean?? No clue... but I am reading The Secret Life of Pronouns... maybe something will be revealed. 

 

Edit:


I have been thinking about the 14 "the"s I supposedly write per email and really find it impossible to believe that my average email has that many "the"s. I am starting to believe that "mailto" only appears on replies and forwards, but not when you first send or receive an email. This would make sense if the code only looks at the body of the email, since the body would only include that information if it had been quoted in the chain.

Midterm Ideas + Isadora HW2 by Sebastian Morales

Midterm Ideas!

For the midterm, Roi Lev, Akmyrat Tuyliyev, Ari J Melenciano and I will be working together. For this first week we were tasked with coming up with 3 ideas for projects, as well as locations to do them at. 

The ideas are quite challenging but very exciting; props to Akmyrat for coming up with two of them, and to Roi for thinking of the concept for the other. Left to right: for the first one we would install mirrors along the train platform, with the mirrors pointing to cloud images on the ceiling. 

The second idea consists of projection mapping an elevator in the exact place where the elevator used to be before ITP became the entire 4th floor. Then we could project ITPers throughout the history of the program. 

The third idea, by far the most challenging one, consists of a remembrance for the catastrophe of the Triangle Shirtwaist Factory. The concept still needs some work due to the importance of the event. If we are going to do it, it needs to be done properly. 

 

For HW2 we had to create a simple patch with at least two scenes and one effect. 

 

The piece actually has 3 scenes; the first two can be observed in the following video.

Study of Pathways Post-mortem by Sebastian Morales

It is that time of the project that rarely ever comes. Time to be critical of what worked, what didn't, and what surprised us. All in the hopes that next time will be much better. 

What pathways did you see?
The pathways observed can probably be divided into two main categories. There was a lot of back and forth motion, a lot of linear movement. This was particularly true of David as he moved around the room. Jade, however, tended to move more about the same area, orbiting around in what could be considered circles or figure eights.  

Which ones did you predict and design for? Which were surprises?
Thinking back, we predicted a lot more circular motion. But more importantly, we predicted a lot more collaboration among the users. We expected physical contact between them; in the end, they didn't even touch once. We predicted a lot more pushing and pulling, perhaps some rolling on the ground, and a lot of expanding and contracting, both in a personal and in a collaborative way. 

 

What design choices did you make to influence the pathways people would take?
It is hard to say if there was one decision that influenced more than the rest, but there were a couple that had a lot of weight. Moving the Kinect from the ceiling to the wall in front of the performers had an immediate effect on how they would move. It literally shifted gravity, the range of possible movements. In retrospect, though perhaps not really a conscious design choice, showing the performers on the screen in front of them really affected the way they moved. They seemed to be more interested in how the technology was capturing the movement than in the movement itself.  

Thinking about design choices it is relevant to talk about the code, even if it did not turn out as expected. The idea was to make a polygon by joining different body joints of the two performers. By showing previous polygons, the performers could see the history of their movement. This is important because it makes them aware of how their motion is not limited to space but extends through time. The visuals are a consequence of the movement but in turn these inform future possible motion.
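The history mechanism is basically a rolling buffer of joint polygons; a stripped-down sketch of the bookkeeping (the drawing itself would happen in the p5.js sketches linked below, and the names here are mine):

```javascript
// Keep a rolling history of joint polygons so past frames can be
// drawn as a fading trail behind the current one.
function makePolygonHistory(maxFrames = 30) {
  const history = [];
  return {
    // joints: array of {x, y} points from the Kinect, one polygon per frame.
    push(joints) {
      history.push(joints.map((j) => ({ x: j.x, y: j.y }))); // store a copy
      if (history.length > maxFrames) history.shift(); // drop the oldest
    },
    // Oldest first, so rendering in order draws newer frames on top.
    frames() {
      return history.slice();
    },
  };
}
```

Each draw loop would push the latest joints and then trace every stored polygon with decreasing opacity, so the movement's recent past stays visible on screen.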

What choices were not made? left to chance?
We only designed the interactions for one or two people, so a third person's joints would not be shown on the display. The joints selected to form lines were only the left shoulder, left wrist, left hip and left foot, since we thought people might move these joints a lot. However, when the users started, they waved their hands and walked around to discover the space, with little focus on the shapes they formed.

What did people feel interacting with your piece? How big was the difference between what you intended and what actually happened?-Jade

We intended to project the screen onto the wall facing the users, but due to the equipment locations we could only project it on the floor. Because of this, they at first expected to see some visuals shown on the floor, but it seemed hard to understand the connections between user behavior and the projection because the visuals projected were reversed. We didn't expect people to pay attention to the floor; instead, we hoped they would watch the visual changes on the two computers. This might have affected how long it took people to understand the interactions.

After we suggested they look at the computers, people could soon get the idea. But one of our programs with floating curves could only catch one user's joints and thus couldn't show an enclosed shape, while the other one showed a changing hexagon. We also intended for people to hold their hands together and touch each other's feet, but people tended to stay away from each other. And the shapes they formed became much wider.

Provide BEFORE and AFTER diagrams of your piece:

Before:

Performers on the floor, connected by foot-hand action

Performers on the ground, connected by hand-hand foot-foot actions

After:

Performers detached, walking and moving in very independent ways.

Alternative motions considered:

Code:

https://alpha.editor.p5js.org/sebmorales/sketches/rypE_wAdl

https://alpha.editor.p5js.org/Jade/sketches/BkfE2U1Yx

 

Important Acknowledgments:
Professor Mimi Yin  
Tiriree Kananuruk for the documentation
Lisa Jamhoury for the development of Kinectron
Class of Sense Me Move Me

Isadora and One Point Perspective. by Sebastian Morales

One Point Perspective

Inspired by the work of Luis Barragán, Jesús Reyes Ferreira and Mathias Goeritz, I decided to do this one point perspective exercise based on the iconic Towers of Ciudad Satélite, in what today could be considered Mexico City. 

Drawing and montages:

 

Isadora HW 

Galvanic Response by Sebastian Morales

This is the second post in the series Talking to the Elephant. In this case, the first results of the Galvanic Skin Response sensor are shown. The axes are fairly arbitrary.

First test, abandoned after the user was asked a personal question. 

Pulling hairs out of the leg, causing pain and spikes in the graph. 

Discovered that heavy breathing, in particular exhaling, will cause peaks on the graph as well.