Light Installation - A Metaphor
Since I had been reading a lot about gender bias in the online world, I decided to narrow my research down to gender bias in automated advertising. I learned that even though media platforms let users choose their gender, the inferred gender classification is almost always binary: male / female. I ask why female users are shown different ads and opportunities (less significant with regard to careers and self-improvement) than male users, even when advertisers use gender-neutral algorithms.

The metaphor in simple words:
The light installation comes in as a metaphor: male users see ads for opportunity-promising products (assuring brighter futures) at higher rates, so they get a brighter light. Female users, who have a harder time climbing the career ladder due to gender discrimination not only in the offline world but also online, hidden under biased algorithms, get a less bright light.
Process:
Visual representation / prototype of simple interaction:
Human detection - experiment 1
Since I don't yet have a Kinect to test human recognition, I decided to run a test with my webcam. This time, instead of detecting movement based on colour and reflections, I used a different technique that lets me detect whether objects are present in a space, as opposed to it being empty; see below:
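In rough code, the idea is something like this: a minimal OpenCV sketch of comparing frames against an "empty room" reference, not my actual patch.

```python
# Minimal background-subtraction sketch (assumption: OpenCV + webcam at index 0).
# The empty space is captured once as a reference; later frames are compared
# against it, so anything that enters the space shows up as "foreground".
import cv2

cap = cv2.VideoCapture(0)
_, reference = cap.read()                      # frame of the empty space
reference = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, reference)        # pixel-wise difference to the empty room
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    occupied = cv2.countNonZero(mask) > 5000   # crude "someone is there" decision
    cv2.imshow('foreground', mask)
    if cv2.waitKey(1) == 27:                   # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```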
Update:
I created a different 'interface': instead of a binary male/female button, I can now assign keyboard keys to more than two brightness modes.
For now I have three brightness modes that represent the proportions of opportunities received (via automated advertising) by three gender categories, male / female / non-binary, with corresponding light brightness of 100%, 30% and 10%, respectively.

Note: these brightness percentages do not come from factual research; I chose them because they illustrate the difference best. It is a practical choice.
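Boiled down to code, the interface is just a lookup (key bindings here are hypothetical; my actual version runs in TouchDesigner):

```python
# Sketch of the keyboard interface (hypothetical key bindings):
# each key selects a gender category, which maps to a brightness level.
BRIGHTNESS = {'m': 1.00,   # male        -> 100% brightness
              'f': 0.30,   # female      ->  30% brightness
              'n': 0.10}   # non-binary  ->  10% brightness

def on_key(key, led_count=120):
    """Return one DMX-style value (0-255) per LED for the chosen mode."""
    level = BRIGHTNESS.get(key)
    if level is None:
        return None                      # ignore unmapped keys
    return [int(255 * level)] * led_count

print(on_key('f')[:5])   # -> [76, 76, 76, 76, 76]
```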
Kinect
Today I got a Kinect; however, I am running into configuration problems and cannot get my laptop to detect it. Looking into where the problem may lie.

Update: the Azure Kinect is still not recognized by my laptop, and I feel like I have tried everything that could be causing it not to work. Not giving up yet, but this is becoming a big and annoying issue.
As a plan B, I created a 'tool' that performs rough person detection and gives me the pixel coordinates. However, since I built it with my laptop's webcam, it works with RGBA values that change with the environment, which means the following (a rough sketch of the tool comes after this list):
1. I need to tune the parameters per venue: the further away you are, the noisier it gets.
2. The values are also influenced by the lighting of the environment, so I need to adjust the parameters to the lighting situation of the venue. More annoyingly, since I am creating a light installation that reacts to this tool's values, a feedback loop occurs and there may be a lot of flickering.
3. I am detecting colour and alpha values, not actual movement, meaning the background gets counted in too.
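The core of the tool is essentially a brightness threshold; a minimal sketch (the threshold value is a placeholder, which is exactly why points 1 and 2 are so annoying):

```python
# Sketch of the plan-B detection: threshold the webcam image on brightness
# and report the coordinates of "occupied" pixels. The threshold is a
# placeholder and would need retuning per venue and lighting situation.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mask = gray > 90                        # venue-dependent brightness threshold
    ys, xs = np.nonzero(mask)               # coordinates of detected pixels
    if len(xs):
        print('centroid:', xs.mean(), ys.mean())   # rough "person" position
```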

However, see the video of the experiment below:
(My phone only works with the front camera, so apologies for that.)
If the video(s) don't play on their own, right click > open video in new tab.
For a better understanding, this is how I would have the Kinect working (translating and scaling the x, y, z coordinates of the detected silhouettes, and multiplying them with the RGBA values of the colour I want the light to appear in). See below:
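A minimal sketch of that translate-and-multiply step (the ranges and colour are made up for illustration, not measured values):

```python
# Sketch of mapping a silhouette's Kinect coordinates onto the LED strip
# (ranges and colour are illustrative placeholders).
def remap(v, lo, hi, out_lo, out_hi):
    """Linearly translate v from [lo, hi] into [out_lo, out_hi]."""
    t = max(0.0, min(1.0, (v - lo) / (hi - lo)))
    return out_lo + t * (out_hi - out_lo)

def silhouette_to_led(x, z, colour=(1.0, 0.9, 0.7), led_count=120):
    """x in metres across the room -> LED index; z (depth) -> brightness."""
    index = int(remap(x, -2.0, 2.0, 0, led_count - 1))
    brightness = remap(z, 0.5, 2.5, 1.0, 0.3)     # closer = brighter
    rgb = [int(255 * c * brightness) for c in colour]
    return index, rgb

print(silhouette_to_led(0.0, 1.0))  # e.g. (59, [210, 189, 147])
```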
Update: Kinect connected!
After a few days of troubleshooting and searching for the issue, I solved my problem and the Kinect successfully connected.

Here is how it is working at the moment. There is some delay, but I have a feeling that is due to the fact that I have a million Google Chrome tabs open (I know, it's a disgrace to the community, really).

I am quite happy with the result regarding interactivity and the ways of controlling the installation as it is right now.
Still not too happy with the fact that at the moment I cannot make the 'gender choice' more interactive, but that is in the plans for the graduation project.

To work on this further, I will see if I can play more with depth than just body tracking, for instance varying the density of the individual LEDs according to the distance of the detected person (closer > denser, further > more scattered display of individual pixels).
Note: for now the depth parameters aren't functioning properly, or at least not how I want them to, so I will naturally have to look into that more.
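The idea in sketch form (the distance ranges are placeholders I made up, not measured values):

```python
# Sketch of the depth-to-density idea: the closer the person, the denser
# the lit pixels on the strip (ranges are placeholders).
import random

def depth_to_pixels(depth_m, led_count=120):
    """depth_m: distance from the Kinect in metres.
    Returns the set of LED indices to light up."""
    # closer (0.5 m) -> light every LED; further (3 m) -> light ~1 in 10
    t = max(0.0, min(1.0, (depth_m - 0.5) / (3.0 - 0.5)))
    density = 1.0 - 0.9 * t
    return {i for i in range(led_count) if random.random() < density}

print(len(depth_to_pixels(0.6)), len(depth_to_pixels(2.8)))  # dense vs scattered
```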
Inspirations >>
WEEK 1

WEEK 2
Preparing the physical material
On Friday, the 21st of January, I went to the metal workshop and cut a 4-meter-long metal beam that I will attach the LED strip to.

Ideally I would hang the beam from the ceiling with the lights facing downwards. I am looking into ways of quickly and safely hanging the installation from the construction beams to achieve the desired result.
WEEK 3
Booking the location and adjusting the setup in school
I booked the interactive installation room for the final presentation on February 2nd.

I also bought a new board so that I don't have to switch and reload the code every time I move between the Interaction Station's wi-fi and my home network. Now I have two boards: one immediately works at home and the other at school.
This Tuesday, the 25th, I went to school with all my hardware to set it up and test it. I first soldered the board, loaded the code with the Interaction Station's wi-fi connection and set up the right addresses in TouchDesigner. Then I connected everything and laid out the LED strip to test how it works at the maximum capacity of recognized players, which I have set to 3 for now.

Being in a space bigger than 5 square meters helped me test the spatial capacity of the Kinect depth camera, which was a pleasant surprise: it has a very wide angle and can properly detect silhouettes up to at least approximately 2.5 meters in depth and 2 meters to each side, which works really well for the scale I am working with at the moment. In addition, through testing I realized that the camera has the best viewpoint from around 1.5 meters height, tilted roughly 30 degrees.
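I won't document the exact protocol here, but as an illustration of what "receiving signals via wi-fi in real time" means for the board, here is a minimal UDP sketch in Python (the IP, port and packet layout are assumptions, not my actual setup; in TouchDesigner the sending side is a network operator, not a script):

```python
# Illustrative sketch of streaming LED values to the board over wi-fi via UDP.
# The board's IP, port and packet format are placeholders.
import socket

BOARD_IP, BOARD_PORT = '192.168.1.50', 7777   # placeholder address

def send_frame(levels):
    """levels: list of 0-255 brightness values, one per LED."""
    packet = bytes(levels)                    # one byte per LED
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(packet, (BOARD_IP, BOARD_PORT))

send_frame([255] * 120)   # full brightness on a 120-LED strip
```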


I also tested the idea of varying the density of pixels according to distance from the Kinect depth camera. It was a fun experiment and I learned more techniques for working with generative visuals (which I then projected onto the LED strip). However, I decided not to implement it in my design, because it made the piece look way too tacky and sparkly, and only created confusion about what the installation is supposed to represent. After all, this proposal to myself was not connected to any part of my researched topic and came purely out of curiosity about what more I could do with the new hardware and freshly gained skills.
Kinect recognizing 3 players in the space
Loading the code that lets the board receive signals via wi-fi in real time
Testing out gender detection
After this week's feedback I was asked to run a test of a gender classification system for my installation. It was a difficult task to deliver in less than a week; however, I made some progress that I am proud of.

I tried to find projects that would be somewhat similar and found a video that explains very well the code and the process of building a gender detection model and running it in real time.

It was going well until I needed to install specific machine-learning dependencies. I spent hours troubleshooting and trying every single 'solution' I could find on the internet (forums, Reddit, GitHub, etc.), but so far nothing. All I know is that I am not the only one, as other people have run into the same errors without getting answers.
Today I could not go to school due to health issues, but I will try tomorrow; perhaps someone can help me.
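For reference, the overall shape of such a real-time pipeline looks roughly like this (a sketch, not the tutorial's exact code; the trained model is left as a placeholder):

```python
# Sketch of a real-time gender-classification loop: detect a face per frame,
# crop it, and hand it to a trained model. The model itself is a placeholder
# here; the tutorial trains its own.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

def classify_gender(face_img):
    """Placeholder for the trained model's predict call."""
    return 'unknown'

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        label = classify_gender(frame[y:y+h, x:x+w])
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y-8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    cv2.imshow('gender test', frame)
    if cv2.waitKey(1) == 27:                   # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```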

Managed to work it out at school with some help from the Interaction Station coordinators.

For now the model is not included in the installation and does not yet send out the triggers I want it to in the future. However:
1. It was really important to me to see that this can work.
2. I now have an example script that I can later adapt to my own project; for instance, it is more important for me to work from silhouettes and gait than from faces, which is what it uses right now.
3. I learnt the basics of how the script works, plus how to train the model and run it in real time.
4. I still need to figure out how to include the environment I created in Anaconda into TouchDesigner, because so far I have run into too many problems to solve in such a short time (see the sketch after this list). // Update: I imported the environment by myself, but haven't yet made the next connection. Still proud of myself.
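One common way to expose a conda environment to TouchDesigner's built-in Python is to append the environment's site-packages folder to sys.path; a sketch with a placeholder path (this may or may not be exactly how my import ended up working):

```python
# Sketch: expose a conda environment's packages to TouchDesigner's Python.
# Typically placed in an Execute DAT's onStart(); the path is a placeholder
# and must point at your own environment's site-packages folder.
import sys

CONDA_SITE_PACKAGES = r'C:\Users\me\anaconda3\envs\gender\Lib\site-packages'

if CONDA_SITE_PACKAGES not in sys.path:
    sys.path.append(CONDA_SITE_PACKAGES)

# after this, the environment's libraries import as usual, e.g.:
# import cv2
```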
WEEK 3
Translating the idea into design
I went to ask for help from a girl who works at the Interaction Station and knows TouchDesigner well; however, she is only available on Wednesday, which is the presentation day.

In the meantime, I installed and prepared the installation in the designated room.

I also realized that, to translate my idea into a design that is easier to understand and to show the contrast I want people to experience (a contrast between each player's brightness: male = bright, female = less bright), the best I can do at this moment is to at least create a difference between players that is not yet triggered by inferred gender (because I haven't yet been able to connect the classification system to TouchDesigner to trigger the brightness of male/female players). So: just different brightness for different players.

When I was done setting up at school, I still had not figured out how to address each player's coordinates with different RGB attributes, and school was already closing, so I had to do it at home. But I live more or less alone, so I came up with a creative solution: building a 'fake person' (their neck is a bit broken) and hoping the Kinect would take it for a human silhouette. It kind of worked, and I was able to assign the desired contrast (at least it works while me and my fake human sit still, lol).
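The per-player contrast itself is a simple mapping once the Kinect hands out player IDs; in sketch form (reusing my three test brightness levels):

```python
# Sketch: assign each detected player a different brightness, not yet
# triggered by inferred gender. Player IDs come from the Kinect's body
# tracking; the brightness table mirrors my three test modes.
PLAYER_BRIGHTNESS = [1.00, 0.30, 0.10]   # player 1, 2, 3

def colour_for_player(player_id, base_rgb=(255, 255, 255)):
    """Scale a base colour by the brightness assigned to this player."""
    level = PLAYER_BRIGHTNESS[player_id % len(PLAYER_BRIGHTNESS)]
    return tuple(int(c * level) for c in base_rgb)

for pid in range(3):
    print(pid, colour_for_player(pid))
# 0 (255, 255, 255) / 1 (76, 76, 76) / 2 (25, 25, 25)
```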
Have a look:
I don't know if this will work in a bigger environment with actual humans walking around, but I guess I will see that tomorrow during the presentation.



Also, see below what I tried before:
I guess what I did here is interesting: taking each part of the skeleton (for testing I only took three), with their tx, ty, tz coordinates assigned to individual channels; merging all those channels to get 'overall' coordinates; transforming the xyz coordinates into a texture operator and overriding the coordinates with a colour (I chose blue, to better see a difference); then transforming the texture operator back into RGB channels, so I could merge the xyz and RGB channels into one full graph with all the info needed to feed into DMX (a lighting control protocol). This method worked very well for me in previous projects where I started with geometry originally built in the software, so I had concrete coordinates. In this case, though, switching between so many different values, operators and channels was too much: confusing, slow, and not working properly. Because the coordinates change in real time, everything was glitching. And that was just for three parts of the first player's body.
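Stripped of all the operator juggling, the data flow amounts to something like this (a numpy sketch with made-up joint values; in TouchDesigner it is a chain of CHOPs and TOPs, not a script):

```python
# Sketch of the underlying data flow: per-joint xyz coordinates plus an
# override colour, flattened into one channel list for a DMX-style output.
# Joint values are made up for illustration.
import numpy as np

joints = np.array([[0.1, 1.5, 2.0],      # e.g. head   (tx, ty, tz)
                   [0.0, 1.0, 2.0],      # e.g. torso
                   [0.1, 0.4, 2.1]])     # e.g. knee

override_rgb = np.array([0.0, 0.0, 1.0]) # blue, to better see a difference

# merge xyz with the colour: one row per joint -> [tx ty tz r g b]
merged = np.hstack([joints, np.tile(override_rgb, (len(joints), 1))])

# (a real setup would normalize the coordinates before scaling to 0-255)
dmx = np.clip(merged.ravel() * 255, 0, 255).astype(np.uint8)
print(dmx)   # flat channel list, ready for a DMX output
```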