Tracery, self-generating HTML sites by Sebastian Morales

HW 4 is all about Tracery, a tool that lets you generate text using substitution rules. Depending on your patience and creativity, you can accomplish some pretty wild things.

This time we were challenged to create our own poetic form and rules. I decided to stray a little from the main assignment and explore the tool in another context.

I have noticed that I am not a very good poet, and that my computer-generated poems get just a couple of views before disappearing, never to be found again under newer posts. To my surprise, there have been some very loyal readers, and no matter what I publish, they keep coming back.

The following poems are dedicated to my loyal readership, bots:

The Python script uses Tracery to generate a different HTML page every time it is run. It generates a top navigation menu, links, embedded images, contact info, as well as headers and text... the whole thing.

On my server, using the npm package isBot, I filter requests by user agent: if it is a bot, I generate a new page and serve it back; if it is a human, I return what was asked for.
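The server itself is Node and uses isBot; the branching logic looks roughly like this Python sketch, where the user-agent check is a deliberately simplified stand-in (real bot detection matches many more patterns):

```python
# Simplified stand-in for the npm isBot check used on the server; the hint
# list here is an assumption, not the real library's pattern set.
BOT_HINTS = ("bot", "crawler", "spider", "slurp")

def is_bot(user_agent):
    ua = user_agent.lower()
    return any(hint in ua for hint in BOT_HINTS)

def handle_request(user_agent, requested_page):
    """Bots get a freshly generated poem page; humans get what they asked for."""
    if is_bot(user_agent):
        return "<html>...a brand new Tracery-generated page...</html>"
    return requested_page
```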

I have only tested my poem for 24 hours so far and I am amazed by the response. I feel so talented, I think I have finally found my audience. Some bots from Google are OBSESSED! They keep coming back and back and back! My server bill might be high this month but I have to stay true to my fans!

Over 4,000 Google bot requests asking for poems


Wifi and how it works, let's get physical by Sebastian Morales

The following article was commissioned by Tom Igoe for his class Understanding Networks. It explains how Wifi works at the physical layer.

For quite a while I have been wondering about how wifi works, not so much in the context of protocols, although that is certainly part of it, but in the physical sense of information traveling through the air around us. By writing this article, I realized that I could not answer this question without going beyond wifi, so this article also goes into how radio works.

The article is structured in layers; this reflects the way I have been learning about the subject. It also means that you can stop reading at any point with a better (hopefully) understanding of how Wifi works. But... if you are still curious, keep reading.

Before I begin, however, let me share some of the assumptions I have about you: I expect you to know a little about how your computer connects to the internet. Not necessarily how things work, more in the realm of knowing that they exist. Things like: routers, local networks, Wifi cards, radio, and electromagnetic waves. If some of these terms are completely unfamiliar, maybe it is time for a quick wiki search. Ah, also some basic color theory will come in handy.

A little about Wifi, or should I say, IEEE 802.11?

Did you know that Wifi actually stands for... nothing? Apparently, when the Wireless Ethernet Compatibility Alliance was looking for new branding they realized they "needed something that was a little catchier than IEEE 802.11b Direct Sequence" [1], and they came up with Wifi, not as an acronym but as a word. Nowadays we rarely think about things like ethernet (or IEEE 802.3), although we use its wireless version every day. The "IEEE 802" part of the name refers to the family of networking standards for local and metropolitan area networks, and the ".11" identifies the working group for wireless local area networks, also called WLANs.

Let's start with the simplest of scenarios: how does a Wifi modem communicate with one device (physically speaking)?

The answer to this question goes beyond Wifi into digital radio. Digital radio is very similar to AM or FM radio, with the difference that it sends information as bits instead of analog signals.

Radio works by taking a wave (called a carrier wave) and modifying it with information. Let's look at the equation for a sine wave to see which parameters can be modified:

y(t) = A sin(ωt + φ)

This equation tells us that there are 3 parameters we can change:

  1. 'A', or the amplitude of the wave
  2. 'ω', or the frequency of the wave
  3. 'φ', or the phase of the wave

With these 3 parameters we can build many different types of digital modulation, but the most common are FSK, ASK, and PSK. The following sketch illustrates the basics of how a message could be encoded using Frequency Shift Keying (FSK), Amplitude Shift Keying (ASK), and Phase Shift Keying (PSK):

All 3 types can be used, but FSK is by far the most common, in part because it is less sensitive to disturbances from other waves. When two or more waves pass through each other, their amplitudes at each given point add up. This makes ASK especially susceptible to noise, while FSK remains somewhat decodable. It is important to know that enough noise will drown out any signal, no matter the encoding. Feel free to play with the following sketch to observe how the different modulation schemes behave under disturbance.
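As a rough sketch of the three keying schemes, here is one way to generate the carrier samples for a bit string. The frequencies, amplitudes, and sample counts are toy values for illustration, not real Wifi parameters:

```python
import math

def modulate(bits, scheme, samples_per_bit=100):
    """Generate carrier samples for a bit string under ASK, FSK, or PSK."""
    out = []
    for bit in bits:
        for n in range(samples_per_bit):
            t = n / samples_per_bit  # time within this bit period
            if scheme == "ASK":      # amplitude carries the bit
                amp = 1.0 if bit == "1" else 0.3
                out.append(amp * math.sin(2 * math.pi * 4 * t))
            elif scheme == "FSK":    # frequency carries the bit
                f = 8 if bit == "1" else 4
                out.append(math.sin(2 * math.pi * f * t))
            elif scheme == "PSK":    # phase carries the bit
                phase = math.pi if bit == "1" else 0.0
                out.append(math.sin(2 * math.pi * 4 * t + phase))
    return out
```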

These sketches illustrate the most basic form of modulation. In practice, more information is encoded using the same concept. For example, suppose that instead of only using two frequencies (0, 1) we used four (00, 01, 10, 11); then each symbol transmitted could encode two bits. This makes the transmission 2x faster but also needs 2x the frequency bandwidth (more about frequency bandwidth later).

How does a router talk to multiple devices on its network?

Now that we know that, from a protocol standpoint, Wifi is just wireless ethernet, we can skip much of how the router addresses each device on its network; it would be the same as for a wired network. Also, my fellow student Mithru explains it in his paper.

How do multiple devices/routers on different networks but in the same physical area not interfere with each other?

Have you ever been in a tall apartment building? If you have, you probably saw dozens of available networks. It would be fair to assume that there are close to a hundred (or more) devices within Wifi reach. Most likely, not all of these devices are talking at once; in fact, most devices rarely talk and constantly listen. Still, it must happen that once in a while two or more devices talk at the same time. How can we possibly communicate in such a loud environment?

This is a broad question and a couple of things come into play; let's start with signal loss. Radio signals decay dramatically over distance. In fact, signal strength is inversely proportional to the square of the distance. This rule applies to much more than radio signals, to anything radiating out over space, and it is known as the inverse-square law.

This explains why we don't see every network in the world each time we try to connect to our Wifi. It also means that some of the networks in our apartment building have such a weak signal by the time they reach us that their interference is not as damaging to our own signal.

It is time to talk about one of the biggest players in radio transmission: frequency. I know I have touched on the term before, but this time let's go a little deeper.

What do cats on the internet and the light from faraway stars have in common?
If you use Wifi (or cell signals) and your answer is that they both travel as electromagnetic waves, then you got it right!

Light travels as electromagnetic waves, and so do Wifi, AM radio... and even x-rays. In fact, the main difference between all of these is just the frequency.

The following image represents the electromagnetic spectrum. On one side we have gamma rays with super short wavelengths (very high frequencies); on the opposite side we have the very long wavelengths (low frequencies) of AM radio. Along the middle of the spectrum we have visible light.

The electromagnetic spectrum

Let's go deeper into the concept and focus just on what we call 'light', or to be more precise, the visible section of the electromagnetic spectrum. What is the difference between blue light and green light? Again, it is just the frequency of each wave.

Now to more practical concepts: you probably also know that if you shine blue light and red light together, you end up with a purplish/magenta color. In this case, both waves are interfering with each other and our eyes end up decoding magenta. But what happens if you use a filter to block all the red light? Well, we are back to blue only.

The reason talking about light is useful is that we are familiar with it, and the same rules apply to Wifi. We also have filters to remove unwanted frequencies from incoming signals.

Frequency Bands - Wifi 2.4GHz and 5GHz

In the early days of Wifi, modems only operated on frequencies in the 2.4GHz range. By now we know that if Wifi mainly uses Frequency Shift Keying (FSK), devices transfer information by adjusting the frequency, which means they need a range of frequencies (not just one) to operate successfully.

The actual spectrum for 2.4GHz Wifi runs from 2.4GHz up to 2.5GHz; this range can be subdivided into three bands that are isolated from each other. Isolation is key to guaranteeing a clean, fast transmission.


From the diagram above, you can see there is actually some white space between the blue frequency bands. This is purposely left blank as a guard band, to prevent further interference.
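The three isolated bands above correspond to the classic non-overlapping channels 1, 6, and 11 of the 802.11 channel plan. A quick sketch of the arithmetic (channel 14's special spacing is ignored, and 22 MHz is the approximate width of one transmission):

```python
# 2.4 GHz Wifi channel centers: channel 1 sits at 2412 MHz and each
# subsequent channel is 5 MHz higher.
def center_mhz(channel):
    return 2412 + 5 * (channel - 1)

def overlap(ch_a, ch_b, width_mhz=22):
    """Two channels interfere when their centers are closer than one channel width."""
    return abs(center_mhz(ch_a) - center_mhz(ch_b)) < width_mhz

print(overlap(1, 6))   # False: centers are 25 MHz apart
print(overlap(1, 3))   # True: centers are only 10 MHz apart
```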

As you can imagine, having only three frequency bands for Wifi devices to communicate makes the airspace crowded. On top of that, we have other devices that don't use the IEEE 802.11 protocol, e.g. microwave ovens, which radiate noise at those frequencies to warm the liquids in our food.

For this reason Wifi is now also available in the 5GHz frequency bands. This time, instead of only 3 usable bands, there are many more. This also allows for the possibility of combining bands, reducing their number while increasing the bandwidth of each. This means that more information can be transferred in the same amount of time, making Wifi speeds much faster.


Once in a while it happens: there is too much interference and the message cannot be decoded. In that case there is no option but to ask for it again. Retransmission can wreak havoc on your Wifi connection, but next time your internet seems unbearably slow, just remember that cats and faraway galaxies have much more in common than we usually think.

I hope this article was helpful in explaining the magic of one of our most used everyday technologies, one that you are most likely using as we speak. Any further questions? Feel free to leave them below.

Sometimes the only thing you can do is fail by Sebastian Morales

Sometimes the only thing you can do is fail, from Sebastian Morales on Vimeo.

[Automating video hw 2. Failing at OpenCV with python]


Time to watch a movie, lights! by Sebastian Morales

Using Wireshark to monitor traffic and turn off my room lights every time I visit Netflix. Third assignment for the Understanding Networks class at ITP, taught by Tom Igoe.


This time we learned a bit about diagnostic tools like Wireshark and Herbivore. Both are tools that let us observe packets and the flow of traffic on our network.

Wireshark also has a terminal interface, so it can be programmed to interact with other programs. In this case, I wired it up so that every time I visit the Netflix site, the lights in my room turn off.


But let's not get ahead of ourselves...

What is Wireshark?
Wireshark is a protocol analyzer, and not just for internet protocols, although that is what we will use it for here. The program is open source and also has a version (tshark) that runs from the terminal.

In a certain way, Wireshark can interpret and unwrap the different layers of the OSI model (Open Systems Interconnection model).

Not quite down to the point of reading the 0s and 1s on the wires, but starting at the data link layer, naming MAC addresses and switches. Then we can go into the network layer: IP addresses and information about the data we are about to send. Next comes the transport layer: ports and protocols. TCP, UDP, DNS? Then the session layer, where we can see info about our connection to another computer (a server). Then we reach the presentation layer, which makes sure that no matter how the information was generated or transmitted, it still has meaning for whoever receives it (here we are already talking about the content of the information); we are not even talking about programming languages, but rather types of info (image, audio, ASCII or Unicode?). Finally we arrive at the application layer, where we focus on how the information relates to the application we are using; in this case that would be our browser, and the information is probably in a JavaScript and HTML format.

Ok, time to bring this down to earth a bit. Let's look at an example of how to use Wireshark.
Wireshark lets you analyze your connection in promiscuous mode, which means you can inspect all the traffic circulating on your local network. As you can imagine, this can be a big risk for everyone, so some networks are configured in a way that does not support promiscuous mode. NYU's networks are configured that way, so from school we can only observe our own traffic.

Wireshark lets you observe many different kinds of protocols at once; sometimes this becomes too much, so we can use filters to only see certain types of packets. In the following image I have the filter "http.response.code", which will only show the interactions that responded over HTTP. I could combine filters, e.g. "http.response.code && ip.dst ==" to only see the HTTP traffic directed at me. In this case, since NYU's networks don't allow promiscuous mode, I know in advance that the response traffic is only mine. But you can imagine how this could be useful if you are administering multiple devices.

Wireshark HTTP response sample


Before I jump ahead, I want to mention two things:
1. In the lower right corner, we can see that we are only looking at 0.0% of the total traffic, or 254 of the 6,461,980 packets captured; this is because very little of the traffic is an HTTP response.
2. Speaking of HTTP, here we can see why it is not the best idea. If we look at the underlined line, where it says "Line-based text data:", we can read the CSS text as-is. In this case it is only styling info, but it could be something more important, including usernames and passwords. This is what any stranger analyzing the network traffic would see (if it weren't for the fact that NYU's network is not configured that way). In other words, use HTTPS whenever possible.

Ok, ok, but how does this connect to the lights?
The first step is to ditch the graphical version of Wireshark so we can easily connect it to other programs and have a bit more control. Wireshark has a terminal version called tshark. The syntax in tshark is a little different, in my opinion simpler, for example:


"-i" quiere decir que vamos a escuchar en una interface, "en0" es la interface que queremos escuchar. ' -f" ' quiere decir que estamos apunto de nombrar los filtros de captura que queremos usar. Aquí podemos separar los filtros por lineas, en el caso de la derecha, ejecutando en siguiente comando va a listar todos los paquetes entre mi compu (172.16...) y



If you want to use double conditionals you can do so by naming them on the same line, for example: "host or". This will listen to all the traffic between my computer (172.16...) and fb, or between my computer and


Ok, ok, but how does this connect to the lights??
So far we know how to filter the traffic a bit so we only see what we want. For the lights, we are actually not that interested in the content of the traffic but in the fact that certain traffic exists. In short, if we detect traffic we can immediately stop the program and switch the lights. To stop the program we can use the "-c" flag, accompanied by the number of packets we want to capture before exiting. In my case I decided to use "-c 10".


Ok, ok, but how does this connect to the lights??
Before going on, I should confess that I am not going to go into much detail here. Maybe one of these days I will write a more detailed guide.

In the meantime: my lights are connected to some "smart" switches. Normally, the switches are controlled with the remote. But with an Arduino and a radio you can easily read and replicate the remote's signals. If we connect this to a Raspberry Pi Zero, we have a smart home!

On the rpi0 I made a local server; if you visit certain URLs (for example: the Arduino turns the lights on or off. This way it can also be connected to Siri or Google Home/okgoogle.

Ok, ok, but how do the lights connect to Wireshark?
So far, we have a Wireshark program that runs from the terminal, monitors our traffic, and closes automatically when we visit a certain web page. We also have a server that turns the lights on or off when it receives certain requests.


To connect the two processes I simply wrote a shell script. If you look closely, I also used a MAC address filter, to prevent my roommate from turning off my lights every time he goes to Netflix.
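The glue could look something like this Python sketch (the original is a shell script). The interface name, MAC address, and lights URL are all placeholders, and the capture filter mirrors the post's setup, where the watched host itself is elided:

```python
import subprocess

# Build the tshark invocation: -i interface, -c exit after N matching
# packets, -f capture filter (here filtering by my own MAC, a placeholder).
def build_tshark_command(interface="en0", mac="aa:bb:cc:dd:ee:ff", count=10):
    capture_filter = "ether src " + mac
    return ["tshark", "-i", interface, "-c", str(count), "-f", capture_filter]

def watch_then_toggle(lights_url):
    # tshark blocks until `count` matching packets are seen (-c), then exits;
    # at that point we hit the light-switch server with a plain HTTP request.
    subprocess.check_call(build_tshark_command())
    subprocess.check_call(["curl", lights_url])
```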


I skipped a couple of steps, especially at the end, but I think that if I got into the details, this would no longer have much to do with Wireshark or with the assignment in general.

And honestly, I feel like watching a movie...

traceroute, mapping my web by Sebastian Morales

Wait... what's with the spanish?

This is the second assignment for the "Understanding Networks" class at ITP. The assignment consists of using traceroute to understand how our packets travel through the network, and how common nodes and paths start to appear.

What is traceroute (tracert on Windows)?
It is a command in the console (terminal) that we can use as a diagnostic tool to observe how our packets travel from our computer to the web page we want to reach: where they stop, how long they take, and how they hop from router to router.

Inspired by the web that is the internet, I wanted to represent the connections not so much geographically but in a more abstract way; at the same time, I wanted to show them almost organically. As if I were analyzing a living organism under a microscope, an organism that is not static and that adapts and changes over time.

Clarification: Tom made me realize a couple of things I was not very clear about. What do I mean by wanting to show the connections "almost organically"? I mean two things. First, we are normally used (at least I am) to thinking of things spatially: coordinates, geographic positions, or landmarks (past the Soriana, two blocks to the left). But the internet, despite being something that exists in cables and computers, does not always work in a geographically efficient way, at least not at first glance. Sometimes we see our requests travel from New York to Europe only to immediately return to the United States, hopping from server to server in ways you cannot explain by looking at a map. Second, sometimes we see our requests travel a certain route, but seconds later the same request can take a completely different route. When I was thinking about how to visualize this graphically, I wanted the system to be flexible, able to adapt and grow, "almost organically", to reflect the flexibility of the internet.

But I am getting ahead of myself...

JSON file of connections.

First I wrote a program using Node.js to run the traceroutes and save the recovered information to a JSON file: basically a list declaring which IP is connected to which IP.

So that my lookups make a little more sense to the observer, the first IP address (my computer's) and the last one (the page of interest) carry names.

Once I managed to save all the connections in the file, I got to work on how to visualize them. For this I decided to use P5.js, a JavaScript library that is very easy to use, especially for creating visuals on the web.

Without much effort I was able to create this mess:

First representation of the connections.

A mess because it is not at all easy to read, and it leaves you even more lost than if you sat down to read the JSON with the connections as a list.

If you think about how to organize all these connections automatically (or even manually), you will realize it is not that easy, especially as the system becomes more complex. However, this kind of connection occurs naturally, both in nature and in infrastructure we have built, which means it must be programmable somehow.

Browsing around the web, I ran into this sketch written by Tazal that has exactly the style I had in mind. Except that, unlike the network of the internet, not all nodes connect to their neighbors; instead there are leader nodes with many more connections.

Organic Blob by Tazal, modified for P5.js
Click it to see what happens.

Tazal's code has three very simple rules:

  1. All nodes move toward the center
  2. If two nodes are within a certain distance of each other, they create a connection
  3. If two nodes are too close, they repel each other

I can use these same three rules; the only difference is that the connections are not created by proximity but by the links between IP addresses.
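The adapted rules can be sketched like this (the original is a P5.js sketch; constants here are made up for illustration). Rules 1 and 3 move the nodes, while rule 2 is replaced by drawing the fixed IP-to-IP links afterwards:

```python
# One simulation step over the node positions.
def step(nodes, pull=0.01, min_dist=20.0, push=0.5):
    for i, node in enumerate(nodes):
        # rule 1: every node drifts toward the center (0, 0)
        node["x"] -= node["x"] * pull
        node["y"] -= node["y"] * pull
        # rule 3: nodes that are too close repel each other
        for j, other in enumerate(nodes):
            if i == j:
                continue
            dx, dy = node["x"] - other["x"], node["y"] - other["y"]
            dist = (dx * dx + dy * dy) ** 0.5 or 1e-6
            if dist < min_dist:
                node["x"] += push * dx / dist
                node["y"] += push * dy / dist
    return nodes
```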

Once the logic was applied, this came out:


In the following image I connected from two different networks to and to. Both connections start from addresses marked "pedregal". You can see how Google is much more efficient, and reaching its servers is much faster.


Vamos a hacer una prueba (let's try an experiment) by Sebastian Morales

So you are wondering why my blog posts are in Spanish? You came to the right place.

For the rest of the semester I have decided to do an exercise and write my blog posts in Spanish. In part it is because I want to share some of the knowledge I am absorbing with people back home. Also, I am learning all of this in English, and if I don't force myself to learn the concepts and terms in Spanish, it is hard to later have those conversations without jumping back and forth between languages.

More important, however, is the fact that a lot of what I am learning is already written in English and not so much in other languages. I have been thinking a lot recently about whether learning English should be a requirement for learning more advanced computer concepts. Although today that might be the case, here is an effort toward the opposite.

Talking with fellow ITPer Sejo about this, he shared a story he read about Mariano Gomez, who has been doing remarkable work in his rural community in Chiapas (southern Mexico), connecting isolated communities to the internet. Recognized by the Internet Society as one of the 25 under 25 making a difference in their communities, he could not receive the award in person: the US embassy denied his visa based on systematic discrimination against indigenous communities, his house not having a proper address with street names and numbers, his bank account not having enough funds, and his region being a strong source of undocumented immigration.

I am not sure where I am going with all of this; I guess the story resonated with me because Mariano is fully bilingual (in Spanish and Tseltal, a Mayan language).

If you are a professor grading me this semester and have difficulty reading my posts, I would like to talk to you. If you are a student or anyone else interested in my posts but can't understand them, reach out to me and I'll explain them to you in English.

Mientras tanto, pasa tanto! (Meanwhile, so much happens!)

Sockets and Guitars by Sebastian Morales


This is the first assignment for the Understanding Networks class at ITP-NYU. It is also the first time I attempt to write this academic blog in Spanish. I should also clarify that although I am not yet officially enrolled in the class, I have faith that someone else will drop it or that prof. Tom Igoe will make room for one more.

The assignment consists of designing and building a device that connects to a server using a TCP socket to play a game. The idea was also to use a microcontroller like an Arduino, or a mini computer (embedded system) like a Raspberry Pi.

The game is very simple: once a player manages to connect to the server, a paddle with their IP address appears on the screen. The paddle can be moved up, down, left, and right.

The objective of the game is to work as a team so that the little balls bounce from paddle to paddle, scoring points. The video on the left is a sample.


Thinking of different ways I could convince Tom to let me into the class, it occurred to me to serenade him.

If you ever visit ITP, you will probably see the famous ITP guitar. I don't know the full story, but someone donated it and there it is. A bit out of tune, a bit beat up, but it still plays. It has been used for many projects and nights of fun. From left to right: Justin Lange, Joe Mango (for Cici) and Tiri.

Experimenting with a simple multimeter and the guitar, I measured the resistance across a string. To be honest, I was a little surprised by the high resistance of the metal strings and how they work perfectly as a linear potentiometer.

To control the game with the guitar, I connected a wire from the Arduino's ground to the end where the strings are tensioned. The pick (covered in copper tape) I connected to 5V, and my fingers, covered in copper tape, I connected to the Arduino to measure voltage. That way, by moving my fingers and touching the string, I could read different values from the Arduino.

Once everything was connected:

To connect to the server from my computer, I simply used the following terminal command:

$ cat /dev/cu.usbmodem1421 | nc 8080

which basically means something like: take (cat) the contents of the serial port (/dev/cu.usb...) and pipe it ( | ) to this ip/port ( 8080) using netcat (nc).

For the (one-week) final project, I ended up using an Arduino MKR1000 and the <SPI.h> and <WiFi101.h> libraries to connect directly from the Arduino, without needing the computer.


Node + Selenium + ITP class search automation by Sebastian Morales

Initially motivated by my misfortune of not being able to sign up for all the classes I wanted for my next semester at ITP, I decided to create a script to constantly monitor the listings in case one of the classes I am interested in opens up.

For this purpose I am using node.js in combination with Selenium. 

It started with me navigating through the inspector window and analyzing the network traffic as I checked classes. I noticed an interesting request that led to the entire NYU classes database. Every NYU student can log into their account and access this, but the reason this link is interesting is that it is open to anyone, meaning I don't have to use my credentials to make the requests.

The actual script is available on github (

At this point the script runs uninterrupted on my local machine through a node server. A future step is to have it run on my digitalocean server.

The script checks the classes I am interested in every two minutes; if by some miracle one of the classes is open, it automatically logs into my account and enrolls me, then sends me an email.
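The monitoring half can be sketched like this (the real script is Node.js + Selenium; the status strings and the fetch function here are assumptions for illustration):

```python
# Statuses worth acting on; real values scraped from the registration
# page may differ.
ACTIONABLE = {"Open", "Wait List"}

def should_enroll(status):
    """A class is worth acting on when it is open or waitlist-able."""
    return status in ACTIONABLE

def check_once(fetch_status, class_numbers):
    """One polling pass (run every two minutes): classes to enroll in now."""
    return [c for c in class_numbers if should_enroll(fetch_status(c))]
```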

So far it has worked for one of my classes! Two more to go :)

Once a class is detected as waitlisted or open, it will automatically log into my account and sign me up for the class.

The following images illustrate the process: a failed attempt at registering me for Live Web, failed because the class is currently closed.

The actual script runs with PhantomJS, which is a sort of invisible browser: it has no GUI and runs in the background.

Controlling a 360 Environment with Node.js + + Three.js by Sebastian Morales

Our Sense Me Move Me final project is a multifaceted performance. For part of it we will be projecting a 360 environment on the walls, ceiling, and floor of the room. Perhaps inspired by VR, maybe as a critique of it, or in an effort to make it more inclusive, we are going to use a single projector on wheels. As we turn or tilt it, the projection will react to reveal the proper side of the virtual world.

Using the sensors inside an iPhone, you can accurately identify the orientation of the phone. If only there was a way to send all these numbers live to my laptop... Interesting fact: a couple of years back, laptops (MacBook Pros) used similar sensors to protect/lock the hard drive in case the computer found itself falling; as hard drives were replaced with SSDs, this feature faded away.

Connecting the phone to the laptop
Before I continue, I want to thank Or Fleisher for his help setting up the server properly.

Now that I look back at it, it all seems quite straightforward, but at the time it seemed daunting.




The entire code is also available on github

Not sure about this, but I'll likely use it as a reference in the future. I first created an npm package.json file and imported all the packages needed.


After setting up all the pages and the server, you can seamlessly control the view of a 360 world by tilting and rotating your smartphone.
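The math behind that control can be sketched independently of Three.js: the phone's orientation angles become a unit direction vector for the camera. This is a simplified version in Python (roll/gamma is ignored, and the angle conventions are an assumption based on the DeviceOrientation alpha/beta names):

```python
import math

def look_vector(alpha_deg, beta_deg):
    """Convert compass heading (alpha) and tilt (beta) to a unit direction
    vector; beta = 90 means the phone is upright, facing the horizon."""
    az = math.radians(alpha_deg)         # rotation about the vertical axis
    el = math.radians(beta_deg - 90.0)   # elevation above/below the horizon
    x = math.cos(el) * math.sin(az)
    y = math.sin(el)
    z = math.cos(el) * math.cos(az)
    return (x, y, z)
```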

Finally, since we were using a projector and wanted the effect of shining a flashlight into a world, we added an alpha image of a spotlight; this hides the edges of the projection.

The 360 image is actually a composite of two images quickly merged together to create a more dramatic and surreal environment.

NETMEDIA Final Proposal by Sebastian Morales


I want to use fb's infrastructure to create a live video broadcast of a machine about to perform an action. Users/viewers can move the machine by "liking" or "loving" the live video. A "love" moves the machine to the right, a "like" moves it to the left. There is a countdown in the feed; when it reaches zero, the machine executes the inevitable action. By liking/loving, users can reposition the machine, and that way prevent/ensure the machine executing the action on a subject. As for what the action and the subjects are? I am not sure.

Examples include:
   - A fish in a fishbowl / a hammer.
   - A dollar bill destined for a fb watcher / a lighter or shredder.
   - A wall constructing/deconstructing robot.
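The like/love mechanic above reduces to a simple tally; a sketch (the function name and step size are assumptions, and the reaction counts would come from polling the live video):

```python
# Each "like" nudges the machine left (negative), each "love" right (positive).
def machine_position(likes, loves, step=1.0):
    return (loves - likes) * step
```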

Why facebook? 
   - The idea of someone being "more real" than an IP address / not anonymous. People can visit your profile and get some information about you.
   - The wider audience reach through an established infrastructure.
   - The established presence and role fb has in today's society, and its appropriation.

Why left and right, and not destroy or save?
Our actions may be simple, but their consequences are often complex, rarely black or white. In the end, what happens is often a mixture of uncountable inputs, a machine to which we all contribute without necessarily understanding how. By clicking left or right you are collaborating (efficiently or not) with a larger audience. The result will be your collective decision, even if it is not the majority's choice.

Thoughts about the internet
   - How it tends to polarize; how "as an online discussion grows longer, the probability of a comparison involving Hitler approaches 1" (Godwin's law). 
   - The chance for a meaningful interaction decreases, hatred grows and biases are strengthened. 
   - The way the medium (internet) nourishes on its own blood to exist. 


Perhaps one of the biggest sources of inspiration for this project is Iraqi-American artist Wafaa Bilal. 

In 2008, in an effort to bring himself and the world closer to the conflicts tearing apart Iraq, Wafaa lived for 31 continuous days in a gallery in Chicago. His loyal companion: an internet-controlled paintball gun. Anyone in the world with an internet connection could move and aim the gun, as well as pull the trigger.


Other thoughts:
Lately I have been reading a lot about the way internet transforms our behavior.
 - We often think about things online as easily accessible, when the truth is that for the most part, the web is invisible to us. Yes, you can write a post and have it read by anyone in the world, but how often does that happen?
Perhaps play with money. Have a (100?) dollar bill and a random player from the crowd. The player has the chance to convince the people not to burn the note. Let the world decide. 

In about 10 minutes a fire turns on over half the space; if the dollar bill is on that side, it gets consumed by the fire. Otherwise, the person gets the dollar bill through facebook messenger.  

This action will be repeated every hour on the hour for 10 hours.

Why one dollar? 
Because I can afford it. 
Because there is no real difference between winning $1 or $10. Perhaps $100 starts making a difference. $10,000 would be great, but that I can't afford to lose.

Perhaps there could be a system where people could pay to increase the pot.

Is the money being destroyed real?
Not sure; apparently it could be illegal. Do I really want to lose that money? Perhaps it makes sense in the larger scheme of things. 




Urban Tumors by Sebastian Morales

Urban Tumors is a hypothetical series of artworks emerging inside the decadent MTA infrastructure. The project was inspired by a couple of thoughts:

  • Decadence of current affairs
  • Vacuum as the seed for life.
  • Tumors as a self generated condition 
  • Maintenance as art 
  • Increased digital shadow

Download obj.

Vacuum as the seed for life

Yaxchilan Mexico








Tumors as self generated condition


Maintenance as art

Mierle Ukeles


Increased digital shadow




Decadence of MTA

Pictures 2 and 4 photo credit, Melissa Orozco

This is a rendering of how the new wall might look once the tiles are maintained. 



Modeling

Face and neck study free 3D print model- ClayGuy

Using some quick pictures I took of the area, with a MetroCard for a sense of scale, I was later able to model an approximation of the actual missing tile.

I then removed the eyes section of the face and merged it with the brick model.


After considering milling, I decided to 3D print instead. This way I could move a little faster; the CNC machine has been really busy lately. 

I was actually surprised by how well the scale turned out after my basic "scan".

The actual mold was a real failure, so I ended up just using the 3D print instead. 

Matching color 

I never realized how difficult it could be to match a color without a sample. The only samples I had were from photos of the station, and although the station is underground and the lighting is always artificially the same, my camera showed dramatic differences between shots. I ended up painting a couple of wooden blocks and comparing them against the actual bricks. Thank you to Akmyrat for his good (color blind!!!) eye and help matching colors.

Priming and painting 

Now you can go and do your own hypothetical Urban Tumor! If you actually wanted to install it you can go to Canal 6th station in NYC and find the perfect place for it. You can also modify it to replace bricks at home or to build an entire wall and divide a continent!

SMMM Kinect Alternative Skeletons by Sebastian Morales

Forward Kinematics

For the following exercise I wanted to experiment with the idea of linking every joint of the body to another, in a linear fashion.

Graphics by Ron Rockwell

The idea is perhaps inspired by the concept of industrial robot arms, where the position of the end effector is a combination of all the previous joints, the first joint having the most effect on the final position and orientation.

Forward kinematics consists of finding the end effector position and orientation from the joint parameters. We are most interested, however, in finding the joint parameters based on a desired end effector position and orientation; this is known as inverse kinematics (IK).
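For a planar chain, that forward dependence can be sketched like this (a simplified 2D illustration, not tied to any particular robot or to the Kinect skeleton):

```python
import math

# Minimal 2D forward kinematics: each joint adds a rotation, each link
# a translation along the current heading. Note how an earlier joint's
# angle affects every link after it, just like the first joint of a
# robot arm having the most effect on the end effector.
def forward_kinematics(lengths, angles):
    """Return the (x, y) end-effector position of a planar chain."""
    x = y = heading = 0.0
    for length, angle in zip(lengths, angles):
        heading += angle
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return x, y
```

Rotating only the base joint by 90 degrees swings the whole chain around, while rotating the last joint moves only the final link.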


Another interesting video showing a similar concept of kinematics is the X125, in particular the series 2. 

Kinectron + p5js

The forward kinematics for this sketch are quite simple. Based on Mimi's code, I simply wrote another function and passed all the joint values in my desired order (arms first, legs second, spine third and head last) to achieve the widest range of movement. 

To create the single line of joints I only had to push and pop the matrix once for the entire set of joints.

I realized that the order of joints is not as relevant in the program above; this is because I am only translating position, not adding rotation depending on joint orientation. (Full code)

Just Not Sorry by Sebastian Morales

This is a quick analysis of the Just Not Sorry Gmail chrome extension. 

What does the extension do?

It matches words or sentences as you compose emails against a list of predefined insecure words. If a match exists, the words are underlined so the user becomes aware of their language use.
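A minimal sketch of that matching step (the phrase list here is illustrative; the real extension ships its own list and does the matching in JavaScript):

```python
import re

# Compile a list of "insecure" phrases into one regex and scan the
# draft text for them, so each hit can be underlined for the writer.
INSECURE = ["just", "sorry", "i think", "actually"]
PATTERN = re.compile(r"\b(" + "|".join(map(re.escape, INSECURE)) + r")\b",
                     re.IGNORECASE)

def find_insecure_words(text):
    """Return the matched phrases, in order of appearance."""
    return [m.group(0) for m in PATTERN.finditer(text)]
```

The word boundaries (`\b`) keep "just" from matching inside "adjust", which is the kind of detail that matters when you are flagging someone's prose.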

Questions for our Guest 

One of the developers of the extension will visit today in class; here are a couple of questions I have for him:

  • What is the "_metadata" folder? It wouldn't let me load the extension. 
  • Google analytics? Trying to track how many users? What kind of info do you get from this? 
  • Storage? 
  • What is going on in the script loader? Why have it instead of naming the scripts in the manifest?
  • update_url?? In the manifest.

Example of JustNotSorry in action.

Getting 3D models from Google Earth by Sebastian Morales

Update: There have been some questions about apple maps vs google maps and which will return better results. I haven't tested apple maps yet, but judging by the picture quality I am guessing it will give better results (at least for the texture).

Google left vs Apple right

Original Post

Thinking about it, this method can be applied to a lot more than google earth models... In this particular case I just wanted to get the corner of a particular building in the city of New York. 

In the past I remember people using programs like 3D Ripper that would try to capture the geometry directly from openGL. I actually tried it once, but without any luck. The other problem with that approach is that you need a Windows machine. 

In this method we will use a photogrammetry approach. 

Start by scouting your building.

Identify what you want to capture and what is irrelevant. The clearer the idea here, the better the chances of success. 

Start a QuickTime screen recording 

Try moving at a regular speed around the object of interest. I would say you have about 1.5 minutes to capture all the geometry you want before you run into problems afterwards. Maybe you can push it up to 2 or 2.5 minutes; I really haven't pushed the method to its limits.

Move around and make sure to get all the different angles you may need. 

A good tip here is to only capture the section of the window with no words, logos or icons; this will save you time later and increase your chances of success. 

This is also one of the reasons why I like using google earth better than google maps: you can turn all the icons off. 

Here is the actual video recording I used if you want to get an idea.


Isolating Frames

The free version of Autodesk Remake (formerly Memento) will only allow you to upload up to 250 frames. Now, our screen recording is about 1.5 minutes long; at 60fps, that means we have about 5400 frames. Truth be told, most of those frames are redundant, since we were moving slowly compared to the screen recording rate. 

There are probably a couple of ways to do this, but the one I am most familiar with uses Photoshop. First import the video frames as layers, limited to every 20 frames or so (5400/20=270), let it run, and then export the layers as files. This last step might take some time, but that is it.  
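A side note on the arithmetic: every 20th frame of a 5400-frame recording gives 270 images, just over the 250 cap, so Remake may reject a few. The smallest stride that stays under the limit is easy to compute (a quick sketch, not part of the Photoshop workflow itself):

```python
import math

# Smallest sampling stride that keeps the exported frame count under
# Remake's 250-photo limit. 5400 frames = 1.5 minutes at 60 fps.
def frame_stride(total_frames, max_frames=250):
    return math.ceil(total_frames / max_frames)

stride = frame_stride(5400)               # every 22nd frame
frames_kept = math.ceil(5400 / stride)    # 246, safely under the cap
```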


This is one of the easiest steps: open Remake, select Create 3D from Photos, select the images and Create Model. The defaults work fine.

You are almost done, but this step actually takes a long time, hours. Go out on a date, have a nice dinner, and get back to work. 

Hopefully, if everything went right, the moment you open Remake you should be able to open your new 3D model. 

I hope this was helpful! 

Old Memories I didn't know I had by Sebastian Morales

Going back into memories while revisiting old data which I didn't know I was sharing. 

You can download your own data here

There is all kinds of data you can download, from your search history to your entire email, from your pictures to every single move you have made (location). Select the data you want and simply download it. This step might take several hours or even days (for me it took a little under a day to process).

Different types of data come in different formats; location, for example, comes as a JSON file.

Interestingly enough, the last entry for my location was back in 1398884229139, that is April 30, 2014 for those of you who don't keep track of time as ms after 1970. 
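Converting that millisecond timestamp back into a date takes one line in Python:

```python
from datetime import datetime, timezone

# The location history stores timestamps as milliseconds since the
# Unix epoch; divide by 1000 to get seconds.
last_ms = 1398884229139
last_seen = datetime.fromtimestamp(last_ms / 1000, tz=timezone.utc)
print(last_seen.date())  # 2014-04-30
```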

What happened then? Why did it stop logging/tracking? 

Let's take a look at the map.

It looks like I left home around noon and walked very slowly to what seems to be my girlfriend's (at the time) dorm... Then radio silence.

I decided to check my email to see if there was some evidence of what could have happened.

Well there you have it. Got an iPhone and the tracking stopped logging. 

Before I jump into other things, however, I wanted to share some days.


Like the day we traveled almost two hours just to get some good churros! Then somehow ended up going twice to the same restaurant up on the north side. 

...or the day that I was trapped in the US (waiting for my OPT) but Christmas was still happening and all my Muslim and Hindu friends showed up, went to target, bought some frozen Pizzas, had a "family dinner" and ended up at one of the best Blues clubs in the city. 

Thinking about other interesting things that could be hidden inside these massive amounts of data, I decided to take a closer look at my email. In this case I was using Immersion, a tool developed by the MIT Media Lab to portray your network of emails.


It looks at whom you are sending emails to and receiving them from. It actually looks at the From, To, Cc and Timestamp fields of every email. If a particular email was sent to multiple people, then connections start to form among those people; the more emails, the stronger the connections and the bigger their bubbles. Take a look at the image above for my last year or so. If you feel like your bubble should be bigger, send me an email (or a hundred). 
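The gist of that graph-building can be sketched in a few lines (a toy version; Immersion's real pipeline is certainly more involved, and the addresses here are made up):

```python
from collections import Counter
from itertools import combinations

# Every email's From/To/Cc addresses get linked pairwise; repeated
# pairs strengthen the edge between those two people.
def build_edges(emails):
    """emails: list of address lists; returns a Counter of address pairs."""
    edges = Counter()
    for people in emails:
        for a, b in combinations(sorted(set(people)), 2):
            edges[(a, b)] += 1
    return edges
```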

For privacy reasons, the Immersion project won't look into the content of the emails, but DanO will. I decided to take a look at the email word count code he made available:

After logging in, it will start analyzing in batches; I am not exactly sure how this works, but here are some results. Clicking next again will add another batch of words to the ones already listed. The list keeps going for what feels like forever, but here are the first 50 words. 

Over 300,000 words are listed in what looks to be almost 2400 emails. I'm not sure about this, but judging by the appearance of "www" and, more importantly, "mailto", I am assuming these represent one appearance per email.
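A rough reconstruction of what such a word-count pass might look like (I haven't seen the internals of DanO's code, so treat this as a guess at the approach):

```python
import re
from collections import Counter

# Lowercase each email body, split on non-letters, and tally every
# word across all bodies, returning the n most frequent.
def top_words(bodies, n=50):
    counts = Counter()
    for body in bodies:
        counts.update(re.findall(r"[a-z]+", body.lower()))
    return counts.most_common(n)
```

Note that this counts every occurrence, so something like "mailto" appearing once per email would depend on whether the body text actually contains it once per message.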

What does this all mean?? No clue... but I am reading The Secret Life of Pronouns... maybe something will be revealed. 




I have been thinking about the 14 "the"s I supposedly write in an email and really find it impossible to believe that my average email has that many "the"s. I am starting to believe that the "mailto" word only appears on replies and forwards, but not when you first send or receive an email. I believe this is true if the code only looks at the body of the email; then the body would only include this information if it had been recorded in the chain.

Midterm Ideas + Isadora HW2 by Sebastian Morales

Midterm Ideas!

For the midterm, Roi Lev, Akmyrat Tuyliyev, Ari J Melenciano and I will be working together. For this first week we were tasked with coming up with 3 ideas for projects as well as locations to do them at. 

The ideas are quite challenging but very exciting; props to Akmyrat for coming up with two of them, and to Roi for thinking of the concept for the other. Left to right: for the first one we would install mirrors along the train platform, the mirrors pointing to cloud images on the ceiling. 

The second idea consists of projection mapping an elevator in the exact place where the elevator used to be before ITP took over the entire 4th floor. Then we could project ITPers throughout the history of the program. 

The third idea, by far the most challenging one, consists of a remembrance for the catastrophe of the Triangle Shirtwaist Factory. The concept still needs some work due to the importance of the event. If we are going to do it, it needs to be done properly. 


For HW2 we had to create a simple patch with at least two scenes and one effect. 


The piece has actually 3 scenes, the first two can be observed in the following video.

Study of Pathways Post-mortem by Sebastian Morales

It is that time of the project that rarely ever comes. Time to be critical of what worked, what didn't, and what surprised us. All in the hopes that next time will be much better. 

What pathways did you see?
The pathways observed can probably be divided into two main categories. There was a lot of back and forth motion, a lot of linear movement; this was particularly true of David as he moved around the room. Jade, however, tended to move more about the same area, orbiting around in what could be considered circles or eights.  

Which ones did you predict and design for? Which were surprises?
Thinking back, we predicted a lot more circular motion. But more importantly, we predicted a lot more collaboration among the users. We expected physical contact between them; in the end, they didn't even touch once. We predicted a lot more pushing and pulling, perhaps some rolling on the ground, and a lot of expanding and contracting, both in a personal and in a collaborative way. 


What design choices did you make to influence the pathways people would take?
It is hard to say if there was one decision that influenced things more than the rest, but there were a couple that had a lot of weight. Moving the Kinect from the ceiling to the wall in front of the performers had an immediate effect on how they moved; it literally shifted gravity, the range of possible movements. In retrospect, though perhaps not really a conscious design choice, showing the performers on the screen in front of them really affected the way they moved. They seemed to be more interested in how the technology was capturing the movement than in the movement itself.  

Thinking about design choices it is relevant to talk about the code, even if it did not turn out as expected. The idea was to make a polygon by joining different body joints of the two performers. By showing previous polygons, the performers could see the history of their movement. This is important because it makes them aware of how their motion is not limited to space but extends through time. The visuals are a consequence of the movement but in turn these inform future possible motion.
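The polygon-history idea can be sketched like this (joint selection and history length are illustrative, and the real sketch ran in p5.js with Kinectron data):

```python
from collections import deque

# Each frame, the selected joints form a polygon; a bounded history of
# past polygons is kept so the trail of movement can be drawn, making
# the motion visible across time, not just space.
class PolygonTrail:
    def __init__(self, max_history=30):
        self.history = deque(maxlen=max_history)

    def add_frame(self, joints):
        """joints: list of (x, y) joint positions in drawing order."""
        self.history.append(list(joints))

    def polygons(self):
        return list(self.history)
```

The `deque` with `maxlen` silently drops the oldest polygon as new frames arrive, which is exactly the fading-trail behavior described above.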

What choices were not made? left to chance?
We only designed the interactions for one or two people, so a third person's joints would not be shown on the display. And the joints selected to form lines were only the left shoulder, left wrist, left hip and left foot, since we thought people might move these joints a lot. However, when the users started, they waved their hands and walked around to discover the space, with little focus on the shapes they formed.

What did people feel interacting with your piece? How big was the difference between what you intended and what actually happened?-Jade

We intended to project the screen onto the wall facing the users, but due to the equipment locations, we could only project it on the floor. Because of this, they first expected to see some visuals shown on the floor, but it seemed hard to understand the connections between user behavior and the projection because the projected visuals were reversed. We didn't expect people to pay attention to the floor; instead, we hoped they would watch the visual changes on the two computers. It might have affected how long it took people to understand the interactions.

After we suggested they look at the computers, people could soon get the idea. But one of our programs with floating curves could only catch one user's joints and thus couldn't show an enclosed shape, while the other one showed a changing hexagon. We also intended for people to hold their hands together and touch each other's feet, but people tended to stay away from each other, and the shapes they formed became much wider.

Provide BEFORE and AFTER diagrams of your piece:

Performers on the floor, connected by foot-hand action

Performers on the ground, connected by hand-hand foot-foot actions


Performers detached, walking and moving in very independent ways.

Alternative motions considered:



Important Acknowledgments:
Professor Mimi Yin  
Tiriree Kananuruk for the documentation
Lisa Jamhoury for the development of Kinectron
Class of Sense Me Move Me

Galvanic Response by Sebastian Morales

This is the second post in the series Talking to the Elephant. In this case, the first results of the Galvanic Skin Response sensor are shown. The axes are fairly arbitrary.

First test, abandoned after the user was asked a personal question. 

Pulling hairs out of a leg, causing pain and spikes in the graph.

Discovered that heavy breathing, in particular exhaling, will cause peaks in the graph as well.