Merge pull request #166 from sandboxnu/separate-cv-modules
Separate computer vision modules
maxpinheiro authored Nov 3, 2021
2 parents 73bc898 + 2b08d97 commit 1ae35ed
Showing 25 changed files with 385 additions and 83 deletions.
10 changes: 10 additions & 0 deletions src/index.css
@@ -116,6 +116,16 @@ input[type="range"]:focus {
width: 60vw;
}

.responsive-img-width {
width: 60vw;
}

@media (min-width:768px) {
.responsive-img-width {
width: 35vw;
}
}

.sobel-image-width {
width: 60vw;
}
23 changes: 19 additions & 4 deletions src/landingPage/ModuleIntro.tsx
@@ -1,8 +1,13 @@
import React from 'react';
import mod8 from '../media/modules/mod8.png';
import mod9 from '../media/modules/mod9.png';
import mod10 from '../media/modules/mod10.png';
import mod11 from '../media/modules/mod11.png';
import mod8 from '../media/modules/previews/mod8.png';
import mod9 from '../media/modules/previews/mod9.png';
import mod10 from '../media/modules/previews/mod10.png';
import mod11 from '../media/modules/previews/mod11.png';
import kernel from '../media/modules/previews/kernel.png';
import gausBlur from '../media/modules/previews/gausBlur.png';
import gabor from '../media/modules/previews/gabor.png';
import sobel from '../media/modules/previews/sobel.png';
import hog from '../media/modules/previews/hog.png';

function getCoverImage(imgName: string) {
switch (imgName) {
@@ -14,6 +19,16 @@ function getCoverImage(imgName: string) {
return mod10;
case 'mod11':
return mod11;
case 'kernel':
return kernel;
case 'gausBlur':
return gausBlur;
case 'gabor':
return gabor;
case 'sobel':
return sobel;
case 'hog':
return hog;
default:
return undefined;
}
66 changes: 63 additions & 3 deletions src/media/modules/module_descriptions.json
@@ -12,14 +12,74 @@
"path": "computer-vision",
"active": true
},
{
"number": 8.1,
"title": "Intro to Images and Kernels",
"dropdownTitle": "Images and Kernels",
"body": "To begin, we discuss how an image is represented as data so it can be used in computer vision algorithms. We also introduce the kernel, and how it relates to the task of extracting features from images.",
"bgColor": "bg-darkblue",
"textColor": "text-white",
"margin": "ml-64",
"imgSrc": "kernel",
"path": "images-and-kernels",
"active": true
},
{
"number": 8.2,
"title": "Image Blurring: The Gaussian Blur",
"dropdownTitle": "Gaussian Blur",
"body": "Here we explore the Gaussian Blur, a popular computer vision technique that blurs an image. It uses a kernel with values obtained from the Gaussian function.",
"bgColor": "bg-offwhite",
"textColor": "text-darkblue",
"margin": "mr-64",
"imgSrc": "gausBlur",
"path": "gaussian-blur",
"active": true
},
{
"number": 8.3,
"title": "Directional Filtering: The Gabor Filter",
"dropdownTitle": "Gabor Filter",
"body": "This section explores another computer vision technique called the Gabor Filter, which extracts portions of an image that follow a specific directional pattern.",
"bgColor": "bg-lightblue",
"textColor": "text-darkblue",
"margin": "ml-64",
"imgSrc": "gabor",
"path": "gabor-filter",
"active": true
},
{
"number": 8.4,
"title": "Edge Detection: The Sobel Filter",
"dropdownTitle": "Sobel Filter",
"body": "In order to detect an object within an image, we search for edges in the image to give us a sense of the shape of the object. We use the Sobel Filter to detect edges in a particular direction in an image.",
"bgColor": "bg-darkblue",
"textColor": "text-white",
"margin": "mr-64",
"imgSrc": "sobel",
"path": "sobel-filter",
"active": true
},
{
"number": 8.4,
"title": "Histogram of (Oriented) Gradients",
"dropdownTitle": "Histogram of Gradients",
"body": "While a Sobel Filter can give us information about edges in a singular direction, we need to know about the edges in all directions simultaneously to fully understand the shape of the image. The Histogram of Gradients allows us to collect the edges in all different directions.",
"bgColor": "bg-lightblue",
"textColor": "text-darkblue",
"margin": "ml-64",
"imgSrc": "hog",
"path": "histogram-of-gradients",
"active": true
},
{
"number": 9,
"title": "Recognition as a Classification Problem",
"dropdownTitle": "Classification",
"body": "Once features have been detected and, perhaps combined into “higher-order features”, machine vision algorithms are often tasked to perform classification–that is, to determine what an array of features is “of”. Does this bundle of features in an image indicate the presence of a dog? Does that bundle of features over there indicate a cat? A set of features can be thought of as a location in a “state space”. Nearby locations in state space tend to correspond to similar objects or scenes. Features of many dogs of the same breed may form a “cluster” in state space. We will explore clusters of features in some simple state spaces and begin to ask how different “naturally occurring” clusters (corresponding to dogs and cats in our example) might be detected by a machine.",
"bgColor": "bg-darkblue",
"textColor": "text-white",
"margin": "ml-64",
"margin": "mr-64",
"imgSrc": "mod9",
"path": "classification",
"active": true
@@ -31,7 +91,7 @@
"body": "Today’s remarkable advances in the power of artificial neural networks leverage foundations laid in 1943 by the McCulloch and Pitts neuron model and Rosenblatt’s 1958 software implementation of the first perceptron. We will examine the architecture and functions of the simplest trainable pattern classifier.This classifier is so simple, in fact, that it cannot effectively separate clusters that are not “linearly separable”. Two clusters in a two-dimensional state space that cannot be separated by a single straight line are examples of non-linearly separable classes. What does the perceptron do when confronted with a data-set that is not linearly separable?",
"bgColor": "bg-offwhite",
"textColor": "text-darkblue",
"margin": "mr-64",
"margin": "ml-64",
"imgSrc": "mod10",
"path": "perceptrons",
"active": false
@@ -43,7 +103,7 @@
"body": "The problem of trying to separate clusters that are not linearly separable has been known for more than half a century, and many variations of multi-layer perceptrons continue to be developed to address and overcome limitations of earlier approaches. Multi-layered artificial neural networks are often said to have “hidden layers” where features are iteratively “weighted” in various ways until a desired classification performance is reached. We will consider a simple example of how hidden layers help to solve the “exclusive-OR problem” and explore the “credit assignment problem.” How does an algorithm know which weights to change and by how much, in order to promote learning for improved classification by a neural network?",
"bgColor": "bg-darkblue",
"textColor": "text-white",
"margin": "ml-64",
"margin": "mr-64",
"imgSrc": "mod11",
"path": "neural-nets",
"active": false
Binary file added src/media/modules/previews/gabor.png
Binary file added src/media/modules/previews/gausBlur.png
Binary file added src/media/modules/previews/hog.png
Binary file added src/media/modules/previews/kernel.png
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
Binary file added src/media/modules/previews/sobel.png
49 changes: 49 additions & 0 deletions src/media/modules/text/computer-vision-intro.json
@@ -0,0 +1,49 @@
{
"title": "Intro to Computer Vision: Images and Kernels",
"sections": [
{
"title": "What is Computer Vision?",
"colorScheme": "dark",
"subsections": [
{
"title": "",
"body": "In order to extract valuable information from an image, computers must be able to process images and detect features in those images based on some criteria. As humans, we do this all the time - if we see an animal, we can identify it as a cat because it has fur, perked up ears, a small nose, and bright eyes. We extract features/traits of the objects in our view that allow us to identify those objects.",
"imgSrc": "blank"
},
{
"title": "",
"body": "Computer vision is conceptually not all that different! To identify features in an image, a small window called a *kernel* slides across the image and at every step, makes a calculation.",
"imgSrc": "blank"
},
{
"title": "",
"body": "Depending on what trait the kernel is supposed to capture, that calculation reveals whether that trait is present at the current location of the window or not. These calculations are then joined into a new image that represents the result of sliding that window across the original image.",
"imgSrc": "blank"
}
],
"demoComp": ""
},
{
"title": "What is a Kernel?",
"colorScheme": "light",
"subsections": [
{
"title": "Subsection 1",
"body": "Let's take a step back and define what an image is first. An image is made up of individual pixels which each have a value that represents the color, and those pixels are organized into a grid of values to give us the resulting image.",
"imgSrc": "animation1"
},
{
"title": "Subsection 2",
"body": "Kernels are not all the different from images in that sense - we specify a much smaller grid of numbers that, based on their value and organization are capable of extracting a \"feature\" from the part of the image the kernel is currently sitting on. A kernel might find the edges of an object in an image, sharpen the details of an image, or smooth the image out.",
"imgSrc": "animation2"
},
{
"title": "Subsection 3",
"body": "The way this feature extraction occurs is by making a 'window' the size of the kernel around a pixel in the original image. We then multiply the original image's pixel values by the numbers in the kernel, and sum them all up. We compute this sum for all pixels by sliding this 'window' across every pixel in the image and repeating the math at every step. As we slide across the image, we stitch together the new values after applying the kernel to create a brand new image!",
"imgSrc": "animation3"
}
],
"demoComp": ""
}
]
}
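
For reference, here is a minimal TypeScript sketch of the sliding-window convolution described in the file above. The function name convolve, the 2D-array image representation, and the zero padding at the borders are illustrative assumptions, not code from this commit.

```ts
// Sketch of applying a kernel by sliding a window over every pixel.
// Assumes a grayscale image stored as a 2D array of numbers; pixels
// outside the border are treated as 0 (zero padding).
type Matrix = number[][];

function convolve(image: Matrix, kernel: Matrix): Matrix {
  const rows = image.length;
  const cols = image[0].length;
  const k = Math.floor(kernel.length / 2); // kernel radius, e.g. 1 for a 3x3 kernel
  const out: Matrix = [];

  for (let y = 0; y < rows; y++) {
    const row: number[] = [];
    for (let x = 0; x < cols; x++) {
      let sum = 0;
      // Multiply the pixels under the window by the kernel entries and add them up.
      for (let ky = -k; ky <= k; ky++) {
        for (let kx = -k; kx <= k; kx++) {
          const py = y + ky;
          const px = x + kx;
          const pixel =
            py >= 0 && py < rows && px >= 0 && px < cols ? image[py][px] : 0;
          sum += pixel * kernel[ky + k][kx + k];
        }
      }
      row.push(sum);
    }
    out.push(row);
  }
  return out; // the "brand new image" stitched together from the window sums
}
```
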
27 changes: 27 additions & 0 deletions src/media/modules/text/gabor-filter.json
@@ -0,0 +1,27 @@
{
"title": "Gabor Filter",
"sections": [
{
"title": "",
"colorScheme": "light",
"subsections": [
{
"title": "Subsection 1",
"body": "The Gabor Filter is a bit more complicated in its definition but the features that it aims to extract are just as intuitive as the Gaussian Blur. Instead of wanting to extract the overall structure of the image, now we are interested in extracting portions of an image that follow a specific directional pattern. The canonical example of this is wanting to extract stripes on a zebra that are in a particular direction.",
"imgSrc": "blank"
},
{
"title": "Subsection 2",
"body": "The gist of how the filter works is that we can detect certain orientations by modifying our angle θ and specifying the size of the window we slide across the image. We can also specify a frequency or magnitude for the directionality that assesses how strong the directionality is as a filter.",
"imgSrc": "blank"
},
{
"title": "Subsection 3",
"body": "The resulting image contains only the windows of the image that contain features that follow the angle specified by θ. This can be highly effective when scanning images which contain text where we only want to retain the text in the image. The reason for this is that images generally have lower directional content as they are smoother relative to text.",
"imgSrc": "blank"
}
],
"demoComp": "GaborDemo"
}
]
}
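
As a companion to the description above, the following sketch builds a real-valued Gabor kernel that could be fed to a convolution like the one sketched earlier. The parameter names (theta, sigma, lambda, gamma, psi) follow the standard Gabor formulation and are assumptions about how such a filter is parameterized, not the GaborDemo implementation.

```ts
// Build an odd-sized Gabor kernel: a Gaussian envelope multiplied by a
// sinusoid oriented along theta.
function gaborKernel(
  size: number,   // odd kernel size, e.g. 21
  theta: number,  // orientation in radians
  sigma: number,  // spread of the Gaussian envelope
  lambda: number, // wavelength of the sinusoid (inverse of frequency)
  gamma = 0.5,    // aspect ratio of the envelope
  psi = 0         // phase offset
): number[][] {
  const k = Math.floor(size / 2);
  const kernel: number[][] = [];
  for (let y = -k; y <= k; y++) {
    const row: number[] = [];
    for (let x = -k; x <= k; x++) {
      // Rotate coordinates by theta so the filter responds to that orientation.
      const xr = x * Math.cos(theta) + y * Math.sin(theta);
      const yr = -x * Math.sin(theta) + y * Math.cos(theta);
      const envelope = Math.exp(
        -(xr * xr + gamma * gamma * yr * yr) / (2 * sigma * sigma)
      );
      const carrier = Math.cos((2 * Math.PI * xr) / lambda + psi);
      row.push(envelope * carrier);
    }
    kernel.push(row);
  }
  return kernel;
}

// Convolving an image with gaborKernel(21, Math.PI / 4, 4, 8) keeps mostly the
// features oriented at 45 degrees (see the convolve() sketch above).
```
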
54 changes: 54 additions & 0 deletions src/media/modules/text/gaussian-blur.json
@@ -0,0 +1,54 @@
{
"title": "Gaussian Blur",
"sections": [
{
"title": "Gaussian Blur",
"colorScheme": "dark",
"subsections": [
{
"title": "Subsection 1",
"body": "As its name suggests, the Gaussian blur is used for blurring. This is useful to make sure that computer vision algorithms don't focus too much on the details of an image. The same way we don't recognize a cell phone by its serial number, an image processing system should focus on the big picture when trying to recognize objects.",
"imgSrc": "blank"
},
{
"title": "Subsection 2",
"body": "The definition of a Gaussian Blur Filter comes from the famous Gaussian/Normal/Bell curve as though it were projected into 3D. The values of the kernel are based on the density of (area under) the bell curve at the location of the current kernel entry. For example, the center of the kernel is the center of the bell curve where the density is the highest; therefore, the largest value of the kernel should be at the center. Conversely, values on the edges of the kernel should be relatively the smallest.",
"imgSrc": "blank"
},
{
"title": "Subsection 3",
"body": "We can control the change in the magnitude of the values from the center to the outsides of the kernel (spread of the bell curve) by modifying the standard deviation of the distribution using the σ (sigma) parameter. The last thing we want to make sure about the kernel is that the values sum up to approximately 1. If we didn't do this, then the image would end up brighter or darker than intended (called the image's \"energy\"). We can do this by summing up all of the values and dividing every entry by the sum.",
"imgSrc": "blank"
},
{
"title": "Subsection 3",
"body": "This kernel defocuses the middle pixel of the window by averaging its value with its neighbors. Done across the whole image, you retain the picture's overall structure without sharp details. This achieves our original goal of making a blurred image.",
"imgSrc": "blank"
}
],
"demoComp": "GaussianBlurDemo"
},
{
"title": "Difference of Gaussians",
"colorScheme": "light",
"subsections": [
{
"title": "Subsection 1",
"body": "The motivation behind the Difference of Gaussians technique is to detect edges in an image. This is particularly useful for segmenting objects from one another or creating a coloring book from pictures on your phone!",
"imgSrc": "blank"
},
{
"title": "Subsection 2",
"body": "A difference of Gaussians Kernel builds directly upon what we have already covered in Gaussian Blurring. We take two images processed using Gaussian Blur filters with different standard deviations (σ, sigma) and subtract them from each other. We want the image that we are subtracting from to have been processed with a Gaussian Blur filter that has a smaller standard deviation (less spread) than the image that we are subtracting.",
"imgSrc": "blank"
},
{
"title": "Subsection 3",
"body": "You can think of this technique as extracting what changes most when you blur. If you think about, the sharpest parts of the image, or, the edges of objects, undergo the most change when blurring. When finding the difference between two images at different sigma values, the sharpest parts of the image, or the edges, are going to be the only thing left.",
"imgSrc": "blank"
}
],
"demoComp": "DiffOfGaussian"
}
]
}
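
To make the two ideas in this file concrete, here is a hedged TypeScript sketch of a normalized Gaussian kernel and a Difference of Gaussians built on top of it. It reuses the convolve sketch shown earlier; the function names and the default 5x5 kernel size are illustrative assumptions, not the GaussianBlurDemo or DiffOfGaussian code.

```ts
// Assumed to exist: the sliding-window convolution sketched earlier.
declare function convolve(image: number[][], kernel: number[][]): number[][];

// Sample the 2D bell curve at each kernel entry, then normalize so the
// entries sum to 1 and the image's overall brightness ("energy") is preserved.
function gaussianKernel(size: number, sigma: number): number[][] {
  const k = Math.floor(size / 2);
  const kernel: number[][] = [];
  let sum = 0;
  for (let y = -k; y <= k; y++) {
    const row: number[] = [];
    for (let x = -k; x <= k; x++) {
      // The center (0, 0) gets the largest value; the edges get the smallest.
      const value = Math.exp(-(x * x + y * y) / (2 * sigma * sigma));
      row.push(value);
      sum += value;
    }
    kernel.push(row);
  }
  return kernel.map((row) => row.map((v) => v / sum));
}

// Difference of Gaussians: blur with a small sigma, blur with a larger sigma,
// and subtract. What remains is what changed most under blurring - the edges.
function differenceOfGaussians(
  image: number[][],
  sigmaSmall: number,
  sigmaLarge: number,
  size = 5
): number[][] {
  const fine = convolve(image, gaussianKernel(size, sigmaSmall));
  const coarse = convolve(image, gaussianKernel(size, sigmaLarge));
  return fine.map((row, y) => row.map((v, x) => v - coarse[y][x]));
}
```
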
27 changes: 27 additions & 0 deletions src/media/modules/text/histogram-of-gradients.json
@@ -0,0 +1,27 @@
{
"title": "Histogram of Gradients",
"sections": [
{
"title": "Histogram of Gradients",
"colorScheme": "dark",
"subsections": [
{
"title": "Subsection 1",
"body": "",
"imgSrc": "blank"
},
{
"title": "Subsection 2",
"body": "",
"imgSrc": "blank"
},
{
"title": "Subsection 3",
"body": "",
"imgSrc": "blank"
}
],
"demoComp": ""
}
]
}
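
The subsection bodies above are still empty placeholders, so as a rough illustration of the module description (collecting edge strength across all directions), here is a sketch of the core histogram step: binning gradient orientations weighted by gradient magnitude. The inputs gx and gy are assumed to be horizontal and vertical gradient images (e.g. Sobel responses); the bin count and the unsigned-angle convention are assumptions, not the module's eventual content.

```ts
// Accumulate a histogram of gradient orientations over an image patch,
// weighting each pixel's contribution by its gradient magnitude.
function gradientHistogram(gx: number[][], gy: number[][], bins = 8): number[] {
  const hist: number[] = new Array(bins).fill(0);
  for (let y = 0; y < gx.length; y++) {
    for (let x = 0; x < gx[0].length; x++) {
      const magnitude = Math.hypot(gx[y][x], gy[y][x]);
      // Map the angle to [0, pi) (unsigned gradients), then to one of the bins.
      let angle = Math.atan2(gy[y][x], gx[y][x]);
      if (angle < 0) angle += Math.PI;
      const bin = Math.min(bins - 1, Math.floor((angle / Math.PI) * bins));
      hist[bin] += magnitude;
    }
  }
  return hist;
}
```
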
27 changes: 27 additions & 0 deletions src/media/modules/text/sobel-filter.json
@@ -0,0 +1,27 @@
{
"title": "Sobel Filter",
"sections": [
{
"title": "",
"colorScheme": "light",
"subsections": [
{
"title": "Subsection 1",
"body": "The Sobel Filter is popularly used in image detection algorithms to extract edges from an image. The filter uses a 3x3 kernel that detects the image gradient, a directional change in color in the image, for a given direction. The example kernel detects a change in color between the left and right sides of the current pixel, which indicates a vertical edge. ",
"imgSrc": "sobelKernelDark"
},
{
"title": "Subsection 2",
"body": "Note that the above kernel only detects a dark-to-light gradient (from left to right); to capture all vertical edges, we need to also use a \"mirror-image\" kernel that has positive values in the left column and negative values in the right (shown here).",
"imgSrc": "sobelKernelLight"
},
{
"title": "Subsection 3",
"body": "To get a more accurate understanding about the shape of the image, we need to <strong>extract the image gradient in the four primary edge directions: vertical, horizontal, diagonal up (45 degrees), and diagonal down (-45 degrees)</strong>. The demo below calculates the images resulting from filtering by the eight Sobel kernels (4 directions, 2 filters per direction for dark-to-light and light-to-dark) for a stop sign image. Note that you can select other images to filter from the box below.",
"imgSrc": "blank"
}
],
"demoComp": "SobelFilterDemo"
}
]
}
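
For concreteness, here are the eight kernels the text refers to, written out as TypeScript constants. The exact diagonal values and the mirror helper are illustrative assumptions about the demo, not values pulled from SobelFilterDemo.

```ts
// Dark-to-light kernels for the four primary edge directions.
const sobelVertical = [
  [-1, 0, 1],
  [-2, 0, 2],
  [-1, 0, 1],
]; // change from left to right => vertical edges

const sobelHorizontal = [
  [-1, -2, -1],
  [ 0,  0,  0],
  [ 1,  2,  1],
]; // change from top to bottom => horizontal edges

const sobelDiagonalUp = [
  [ 0,  1, 2],
  [-1,  0, 1],
  [-2, -1, 0],
]; // 45-degree edges

const sobelDiagonalDown = [
  [2,  1,  0],
  [1,  0, -1],
  [0, -1, -2],
]; // -45-degree edges

// The "mirror-image" light-to-dark filters are just the negated kernels,
// giving the eight kernels (4 directions x 2 polarities) mentioned in the text.
const mirror = (k: number[][]) => k.map((row) => row.map((v) => -v));
```
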
