Building an Environment Monitor for my Apartment

We moved to a new apartment this fall and I wanted to investigate ways to monitor air quality/humidity. Our new place has radiator heat that we don’t have much control over. As a result the air is incredibly dry and goes through intense temperature swings daily. The plants and my skin weren’t doing the best in the desert-like environment. In short, the air temperature/humidity levels are unpredictable.

Because I’m crazy, I decided to assemble some hardware to detect various air quality metrics. Then I built a website to track this data: PatsApartment.com

I bought a bunch of I2C sensors and thought logging a variety of air quality metrics would be fun. One of the goals was to try to figure out if the radiators are set on some sort of timer/schedule or if they are responding to temperature changes in some sort of PID system. Additionally, I wanted to see what effect a humidifier might have on air quality and whether my plants decrease CO2 levels throughout the day.

Here it is during construction

Monitor Setup

The main monitor was built using a Raspberry Pi with a bunch of sensors connected via I2C and an analog-to-digital converter. I set up a cron job to run a Python script that reads the sensor values for 10-15 seconds and calculates an average value for each sensor. The script then sends this data to an API I wrote using Laravel. The Laravel site shows the current sensor readings and a chart showing CO2 levels over time.
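A rough sketch of what that cron script does is below. Everything beyond the sample-average-POST flow is an assumption: the read_sensors() helper, the field names, and the API URL are all placeholders, not the real script.

import time
import requests  # assumes the requests library is installed

API_URL = "https://patsapartment.com/api/readings"  # hypothetical endpoint

def read_sensors():
    # placeholder: the real script reads each value from an I2C sensor
    # or an ADC channel
    return {"co2": 0.0, "voc": 0.0, "dust": 0.0,
            "temp": 0.0, "humidity": 0.0, "light": 0.0}

samples = []
for _ in range(10):            # sample once per second for ~10 seconds
    samples.append(read_sensors())
    time.sleep(1)

# average each metric across the samples
averages = {key: sum(s[key] for s in samples) / len(samples)
            for key in samples[0]}

# send the averaged readings to the Laravel API
requests.post(API_URL, json=averages, timeout=10)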

There is a tiny OLED display on the main board that gives you a readout of all the levels.

Basic diagram showing all system parts

Air Quality Metrics

I got a little carried away on Adafruit/eBay and ended up with a ton of different sensors. Right now the system can sense:

  • CO2 Levels
  • Volatile Organic Compounds (plastics, combustible stuff, certain household cleaners, other nasty stuff)
  • Dust levels
  • Temperature
  • Humidity (this is a tricky one to measure accurately for various reasons)
  • Relative light levels

Website/API

I built a simple Laravel site that has an API endpoint for logging sensor values. There is a single page that loads a basic listing of current values, along with a chart showing CO2 levels and other gathered data.

Future Plans

Some future plans:

  • Move project off the breadboard to some perfboard
  • Add more charts to the website
  • Run some sensors outside for humidity/temp
  • Set up alerts for when levels reach a certain threshold

Building an 8-bit Computer From Scratch: Part 1 of ?

It was really cold this winter in Chicago. For a few days, the wind chill was around -50F, which is pretty crazy. To pass the time, I decided to start a new electronics hardware project. The past year or two I’ve mostly been doing hardware projects in my spare time because I never have the chance to do any at work. This project is going to take a long time.

Inspiration for this project came from an amazing YouTube channel by Ben Eater. One of the big series on his channel is a step-by-step guide to building a computer using logic integrated circuits. He goes through all the steps needed to build the CPU clock, registers, arithmetic logic unit, system bus, RAM, ROM, logic for displaying numbers and loads of other neat stuff. Ben’s personal website has a full list of parts you can get online and circuit schematics for each of the modules.

Computer Design

The basic structure of the planned computer is pretty simple. As a result it isn’t able to do much. In the end, I want it to add/subtract 8-bit numbers, using two’s complement to work with negative integers. It will have about 5 or 6 assembly instructions and be able to run programs that are around 14 instructions long.

Saying it is an 8-bit computer is somewhat misleading because the memory address space will only be 4-6 bits depending on how things are built. The clock speed maxes out at a few hertz so you’re not going to be calculating too fast.

At this point, I have a pretty basic binary adder/subtractor built out of 5 modules. It can add two 8-bit numbers and that is about it.

  • System Clock
  • A Register
  • B Register
  • Instruction Register
  • Arithmetic Logic Unit

System Clock

The system clock is made up of a few 555 timer chips set up in their various operating states (astable, monostable, and bistable). They blink an LED at a rate that is adjustable with a variable pot. The clock runs in two modes, switched by a simple latch built with some logic chips:

  • Auto-mode (clock signal runs over and over again at a set rate)
  • Manual mode (clock signal sent every time you click a button)

It runs at a few hertz. You can overclock it by turning the pot to adjust the timer circuit…

Auto/manual system clock built using 555 timer chips

Here’s the schematic from Ben’s website:

System clock circuit diagram
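For a rough sense of the speed range, the standard astable 555 frequency formula is f = 1.44 / ((R1 + 2*R2) * C). The component values below are assumptions for illustration, not necessarily the ones from Ben’s schematic:

# standard astable 555 timer frequency: f = 1.44 / ((R1 + 2*R2) * C)
R1 = 1_000    # ohms (assumed value)
C = 1e-6      # farads (assumed 1 uF timing capacitor)

# sweep the pot (R2) to see how the clock rate changes
for R2 in (100_000, 500_000, 1_000_000):  # ohms
    freq = 1.44 / ((R1 + 2 * R2) * C)
    print(f"R2 = {R2 / 1000:.0f}k ohms -> {freq:.1f} Hz")

With these values the clock sweeps from roughly 7 Hz down to under 1 Hz, which lines up with the “few hertz” ballpark.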

The Registers

The registers are built using two 4-bit flip-flop chips to hold values loaded onto them. A load and clock signal are fed in to store values, and you can read them back by toggling a read/write pin. These are SN74LS173 chips from Texas Instruments, one of many TI logic chips used in this build.

Each register holds an 8-bit value you can load onto it.

I built A and B registers to load up values to add/subtract, and then an instruction register to store instructions/memory addresses. The instruction register hasn’t been used yet because I don’t have anything to control CPU logic.

Register circuit diagram
This is one of the registers hooked up to the system clock. It can read/write values to the yellow LED’s
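As a rough mental model (just a sketch in Python, not anything running in the project), each register latches whatever is on the bus during a clock pulse, but only while its load line is enabled:

class Register:
    """Toy model of one 8-bit register (a pair of 4-bit '173s)."""

    def __init__(self):
        self.value = 0

    def clock(self, bus, load):
        # on a clock pulse, latch the bus value only if load is enabled
        if load:
            self.value = bus & 0xFF

a_reg = Register()
a_reg.clock(bus=0b00101010, load=True)   # latches 42 from the bus
a_reg.clock(bus=0b11111111, load=False)  # load disabled: value unchanged
print(a_reg.value)  # 42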

Arithmetic Logic Unit

The ALU can add/subtract two 8-bit numbers. It makes use of two’s complement to handle negative values. Normally a computer’s ALU would handle some bitwise operations, but this one is only going to add/subtract.

ALU circuit diagram
ALU connected to the A and B registers
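Here’s a quick sketch of what the hardware is doing. The function name and structure are my own, just to illustrate the two’s-complement trick: subtraction is addition of the inverted B value plus 1.

def alu(a, b, subtract=False):
    # subtraction is just addition of B's two's complement:
    # invert every bit of B and add 1
    if subtract:
        b = (~b & 0xFF) + 1
    return (a + b) & 0xFF  # keep 8 bits, like the hardware bus

print(alu(12, 5))  # 17
result = alu(5, 12, subtract=True)
# reinterpret the 8-bit result as a signed value
print(result - 256 if result > 127 else result)  # -7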

Conclusion

There is still a ton of work left to do to get this working decently. I need to work on the system bus, hooking up RAM/ROM and something to keep track of CPU instructions. Memory management will also be kind of difficult. I’m planning out the next steps now. I want to try some stuff outside of Ben’s plans but am not sure what yet.

Here is everything hooked up in all its messy glory:

Clock/ALU/A and B register all hooked up writing to the bus. Not pictured is the instruction register

Finding Plaid Shirts with the Amazon Rekognition API

Is that flannel you are wearing?

I’m pretty sure I’ve been bitten by the machine learning bug! The past few weeks, I’ve had the opportunity to work with Amazon Rekognition. It’s a newfangled deep-learning image recognition API that is part of AWS, and it’s been fun to play around with. You feed it images and it will send back attempts to detect objects, faces, text and other things you’d want to find. No need to train your own model and run all sorts of specialized software. Just sign up for AWS, set up a client on your machine and start sending the API images to analyze. It’ll take you around 30 minutes to set up a simple proof of concept and get an idea of the API’s features.

What is Rekognition?

First, let’s go over a little bit about what Rekognition is for those who aren’t familiar. Rekognition is an API for deep-learning based image and video analysis. You send it photos or video and it can identify objects, people, faces, scenes, text and other stuff. Rekognition’s deep-learning algorithm will attempt to label objects in the image.

There are several types of labeling currently supported.

I was blown away by how many objects it could label and the granularity of its classifications. My expectations of the API’s accuracy were low initially but I was quickly proven wrong. For instance, the API is able to distinguish different breeds of dogs. It knows there’s a difference between a dung beetle and a cockroach. It is also great at finding faces and labeling the parts of a face: nose, eye, eyebrow and pupil location are just a few. There was a bit of uncertainty when trying to label emotions. For some reason, it always labeled my emotion as ‘confused’. As time goes by it will only get better at identification. One thing it never fails to label is flannel/plaid. If there is plaid in an image, Rekognition will label it like there is no tomorrow.

It can also analyze streaming video for faces in real time. I haven’t tried video yet but at work we have an AWS DeepLens preordered. It has specialized hardware for deep learning and will be able to use custom detection models.

Let’s Start Tinkering

It is easy to start tinkering with Rekognition. We’ll use the AWS CLI and an S3 bucket to get started. We will upload images to the S3 bucket and pass them to the API via the CLI. When the API is done processing an image it will return a string of JSON.

To begin, we will set up a simple environment to send images to the API:

  1. Set up the AWS CLI
  2. Create an S3 bucket for the images to be labeled
  3. Upload those images
  4. Use the CLI to run Rekognition on bucket images

If you don’t have the CLI set up, there are plenty of third-party guides online. The AWS docs aren’t known for their quality.

Next, you’ll need to create an S3 bucket with public read permissions. I used the GUI on the AWS console to make one in a region close to me, in this case us-east-1. Take note of your bucket’s region because Rekognition needs it to find the right image. Once the bucket was ready, I uploaded a few images to the bucket and made sure they were publicly accessible.
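If you’d rather script that part than click through the console, here’s a sketch using Boto3. The bucket name and file are placeholders, and this assumes your account allows public-read ACLs on the bucket:

import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# create the bucket (us-east-1 is the default region, so no
# LocationConstraint is needed)
s3.create_bucket(Bucket="tinker-bucket")

# upload an image and make it publicly readable
s3.upload_file(
    "wing-woman.jpg",   # local file
    "tinker-bucket",    # bucket name
    "wing-woman.jpg",   # object key
    ExtraArgs={"ACL": "public-read"},
)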

Look at that tasty pic

woman eating chicken wings with face covered in hot sauce

We will use cutting edge technology to analyze this image


# The base CLI command for Rekognition is: aws rekognition
# To detect labels in an image, use: aws rekognition detect-labels
# We need to specify the S3 bucket and the proper region
# The bucket is described with escaped JSON
# The region flag uses the abbreviations used across AWS
# This page has all the region shortnames if you forgot:
# https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html

# This will return JSON describing the image
aws rekognition detect-labels --image "{\"S3Object\":{\"Bucket\":\"tinker-bucket\",\"Name\":\"wing-woman.jpg\"}}" --region us-east-1

Here’s a sample of the JSON that gets returned. Some of the results are pretty funny:
"Labels": [
 {
 "Name": "Human",
 "Confidence": 99.27762603759766
 },
 {
 "Name": "Corn",
 "Confidence": 94.26850891113281
 },
 {
 "Name": "Flora",
 "Confidence": 94.26850891113281
 },
 {
 "Name": "Grain",
 "Confidence": 94.26850891113281
 }
}

Here’s a command to use face detection instead of object detection. Face detection mode returns an array with an entry for each face in the image; it can detect up to 100 faces per image. For each face it also returns an array of potential emotions. The emotion detection seems hit or miss at times, but it is still really good.
# Note the --attributes ALL argument at the end
# Without this the array of emotions wouldn't be returned
aws rekognition detect-faces --image "{\"S3Object\":{\"Bucket\":\"tinker-bucket\",\"Name\":\"wing-woman.jpg\"}}" --region us-east-1 --attributes ALL

// the result has been shortened
{
 "FaceDetails": [
 {
 "BoundingBox": {
 "Width": 0.3016826808452606,
 "Height": 0.46822741627693176,
 "Left": 0.359375,
 "Top": 0.15793386101722717
 },
 "AgeRange": {
 "Low": 23,
 "High": 38
 },
 "Smile": {
 "Value": false,
 "Confidence": 74.72581481933594
 },
 "Eyeglasses": {
 "Value": false,
 "Confidence": 50.551666259765625
 },
 "Emotions": [
 {
 "Type": "HAPPY",
 "Confidence": 38.40011215209961
 },
 {
 "Type": "SAD",
 "Confidence": 3.1377792358398438
 },
 {
 "Type": "DISGUSTED",
 "Confidence": 1.5140950679779053
 }
 ],
 "Landmarks": [
 {
 "Type": "eyeLeft",
 "X": 0.4536619782447815,
 "Y": 0.3465670645236969
 },
 {
 "Type": "eyeRight",
 "X": 0.5664145946502686,
 "Y": 0.3220127522945404
 }
 ]
 }
 ]
}

It returned a bounding box for the person’s face, a potential age range, whether or not they have glasses, an array of potential emotions and coordinates for facial landmarks like the person’s eyes.

One thing to note is the coordinates for landmarks are formatted as decimals between 0.0 and 1.0. To get values in pixels, multiply ‘X’ coordinates by the source image’s width and ‘Y’ coordinates by its height.
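For example (the image dimensions here are made-up values, not the actual photo’s):

# convert a normalized landmark to pixel coordinates
image_width, image_height = 1024, 768  # assumed dimensions

landmark = {"Type": "eyeLeft", "X": 0.4536619782447815, "Y": 0.3465670645236969}
pixel_x = landmark["X"] * image_width
pixel_y = landmark["Y"] * image_height
print(f"{landmark['Type']}: ({pixel_x:.0f}px, {pixel_y:.0f}px)")  # eyeLeft: (465px, 266px)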

Pretty Neat

Rekognition is pretty impressive considering how simple it is to set up and start using. The CLI is an easy way in, although there are some parts of the API you can’t use from it. For instance, you are stuck uploading images to S3 whenever you want them processed. Using an SDK gives you more control over the API and will let you integrate it seamlessly with applications you are writing. I have been using the Python SDK, Boto3, and have been very pleased. It has methods for pretty much any AWS product. At some point I’ll post about using it to alter S3 buckets.
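As a taste, here’s roughly what the earlier detect-labels call looks like through Boto3. The bucket and file names are the same placeholders as before:

import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# same call as the CLI example, but from Python
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "tinker-bucket", "Name": "wing-woman.jpg"}},
)

for label in response["Labels"]:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")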