Sunday, December 20, 2015

Weekly Progress Report #8

Weekly Progress Report #8
Adnan Khan and Noah Borel
12/20/15


Progress:
- Calculations, adjustments, and measurements made for the compound eye simulation. The cardboard frame will have holes so an external webcam can be mounted in the frame to capture images at the different angles. **The real model is to be twice the scale of the model diagram drawings.** (A dimension-check sketch follows at the end of this Progress section.)
   - Width = 10 inches.
   - Height = 5 inches.
   - Octagon top:
      - Width = 4 inches.
      - Side length  = 1.657 inches.
      - Longest length from center to a corner (vertex) of the octagon = 2.165 inches.

   - Trapezoid sides:
      - Width when angled = 3 inches.
      - Height when angled = 5 inches.
      - Length = 5.83 inches.
      - Trapezoid extended from a triangle with its vertex at the center of the octagon.
      - Angle between the octagon and the angled trapezoid = 121°.


- Materials acquired for construction of frame for compound eye simulation:
   - Cardboard pieces
   - Scissors
   - Tape
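
As a check on the numbers above, here is a minimal C++ sketch (C++ to match the OpenCV work elsewhere in the project) that recomputes them. It assumes the octagon top is a regular octagon whose 4-inch width is measured across the flats, and that each trapezoid panel extends 3 inches outward while dropping 5 inches; the variable names are illustrative.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979323846;

    // Octagon top: regular octagon, 4-inch width measured across the flats.
    double widthAcrossFlats = 4.0;
    double side = widthAcrossFlats / (1.0 + std::sqrt(2.0));        // expected ~1.657 in
    double centerToCorner = side / (2.0 * std::sin(PI / 8.0));      // expected ~2.165 in

    // Trapezoid side panel: extends 3 in outward while dropping 5 in.
    double out = 3.0, drop = 5.0;
    double panelLength = std::sqrt(out * out + drop * drop);        // expected ~5.83 in
    double foldAngle = 180.0 - std::atan2(drop, out) * 180.0 / PI;  // expected ~121 deg

    std::printf("octagon side length   = %.3f in\n", side);
    std::printf("center to corner      = %.3f in\n", centerToCorner);
    std::printf("trapezoid length      = %.2f in\n", panelLength);
    std::printf("octagon-to-panel fold = %.1f deg\n", foldAngle);
    return 0;
}
```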


Problems:
- The angle between the trapezoids is still uncertain. On the flat 2D layout the trapezoids cannot touch, because folding them downward would then make them overlap.

   - A comparison was made between trapezoids that touch on the flat layout and trapezoids that are thinner. The thinner trapezoids are what is needed, so that when folded downward their sides meet.

   - The horizontal (flat-layout) angle is what is being sought. It could not be found by comparison with the larger trapezoid.

   - Instead, the angle is approximated by looking at the boundary cases. On one end, if a rectangle is used, the horizontal angle is 90°, so the vertical angle after folding is also 90°. On the other end, if the largest trapezoid is used, the horizontal angle is 112.5°, so the vertical angle after folding is 180° because the fold could not happen at all.
   - The approximation assumes that the vertical angle increases linearly with the horizontal angle. Over that range, every 1° increase in the horizontal angle adds 4° to the vertical angle (since 90°/22.5° = 4), i.e. vertical ≈ 90° + 4 × (horizontal − 90°). At a horizontal angle of 98°, the vertical angle is 122°, a good approximation of the desired 121°. (See the sketch after this list.)

   - Unsure whether 98° can actually be used, since the linear relationship between the angles is uncertain.
- The length of the end of the trapezoid is still unknown. End length of trapezoid to be found once angle measurement is verified.
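
Below is a minimal sketch of the linear approximation described above. It simply assumes the vertical fold angle grows linearly from 90° to 180° as the horizontal angle goes from 90° to 112.5°, which is exactly the unverified assumption noted in the bullets.

```cpp
#include <cstdio>

// Assumed linear map: 90 deg horizontal -> 90 deg vertical, 112.5 deg horizontal -> 180 deg vertical,
// i.e. each 1 deg of horizontal angle adds (180 - 90) / (112.5 - 90) = 4 deg of vertical angle.
double verticalAngle(double horizontalDeg) {
    return 90.0 + 4.0 * (horizontalDeg - 90.0);
}

int main() {
    // 98 deg horizontal gives 122 deg vertical, close to the desired 121 deg.
    std::printf("98 deg horizontal -> %.0f deg vertical\n", verticalAngle(98.0));
    return 0;
}
```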


Plan:
- Begin physical construction of compound eye simulation frame:
   - Use VEX Robotics parts to reinforce the cardboard pieces. The metal pieces are also bendable and can be fixed at specific angles.
- Further investigate issue with trapezoid angles and end length.


Sunday, December 13, 2015

Weekly Progress Report #7

Weekly Progress Report #7
Adnan Khan and Noah Borel
12/13/15


Progress:
- Optimized the tracking ability of the color-based and frame difference tracking methods (a brief parameter-tuning sketch follows at the end of this Progress section):
   - Color-based tracking:
      - Increased the erode and dilate kernel values. Works best when tracking a small object, such as a small container lid.
   - Frame difference tracking:
      - Decreased the blur size value and increased the sensitivity value. Works best when tracking a hand or finger moving left/right or up/down.

- Brainstormed ideas for object tracking with a compound eye simulation:
   - Use computer webcam or external webcam.
   - Move webcam around axis to capture images in shape of semi-sphere.
   - 9 total images captured for semi-sphere coverage.
   - Piece together captured images to form one single image.
   - Blur image in Microsoft Visual Studio to represent how compound eye captures blurred images.

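Roughly what the tuning above amounts to, as a hedged sketch using the OpenCV C++ API; the kernel sizes and threshold values are placeholders to be tuned, not the project's actual numbers.

```cpp
#include <opencv2/opencv.hpp>

// Color-based tracker clean-up: larger erode/dilate kernels remove more speckle noise
// but can erase a small target, so the sizes are tuned per object.
void cleanColorMask(cv::Mat &mask, int erodeSize, int dilateSize) {
    cv::erode(mask, mask, cv::getStructuringElement(cv::MORPH_RECT, cv::Size(erodeSize, erodeSize)));
    cv::dilate(mask, mask, cv::getStructuringElement(cv::MORPH_RECT, cv::Size(dilateSize, dilateSize)));
}

// Frame-difference tuning: a smaller blur kernel and a lower sensitivity threshold make the
// tracker respond to smaller motions, such as a hand or finger moving left/right or up/down.
void motionMask(const cv::Mat &grayPrev, const cv::Mat &grayCurr,
                cv::Mat &mask, double sensitivity, int blurSize) {
    cv::absdiff(grayPrev, grayCurr, mask);
    cv::threshold(mask, mask, sensitivity, 255, cv::THRESH_BINARY);
    cv::blur(mask, mask, cv::Size(blurSize, blurSize));
    cv::threshold(mask, mask, sensitivity, 255, cv::THRESH_BINARY);
}
```
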
Problems:
- Some square image capture areas would overlap. Some areas of the semi-sphere would not be covered by the square image captures.

Plan:
- Continue development of compound eye simulation:
   - Learn to combine images in Microsoft Visual Studio.
   - Calculate required angles for images to be captured at.
- Integrate frame difference tracking method into new semi-sphere apparatus.  

Sunday, December 6, 2015

RE: Patent Search Results

An excellent list of patents. Some of the patents seem to focus more on the imaging system, and a few of them are more algorithmic. The number of patents is relatively small compared to traditional computer vision fields. The lack of real-world applications is a strong indication of the maturity level of compound-eye technology. In other words, there are plenty of opportunities for innovation in this field! The reason that not many people are involved in this field is probably the lack of available compound-eye imaging devices. However, that doesn't prevent you from creating a collection of simulated compound-eye images and testing your new algorithms on them.

Monday, November 30, 2015

Patent Search Results

Adnan Khan and Noah Borel
11/30/15


Trends/Scopes: The use of the compound eye has been explored in depth by various corporations. These groups tend to use similar models of a multi-aperture image capture apparatus but differentiate themselves with different algorithms. Overall, the compound eye designs are aimed at tracking one or more objects. Among object tracking patents the trend is similar, as each group uses a different tracking method, and most claim that their own system is highly accurate and effective.

Possible Areas of Innovation: Compound eye projects have not yet become heavily involved in real-life applications. This matters for our own project, since we plan to apply the compound eye to the real-life problem of tracking and taking down an unidentified drone. The next step is to learn from what has been done, build a working model, and slowly bring it to reality.

Sunday, November 29, 2015

Weekly Progress Report #6

Weekly Progress Report #6
Adnan Khan and Noah Borel
11/29/15


Progress:
- Completed code for object tracking based on sequential frame difference (a code sketch follows at the end of this Progress section):
   - Capture two consecutive images.
   - Convert captured RGB (color) images to gray scale (black to white) images.
   - Use a sensitivity value to change clarity of gray scale image.
   - Use the OpenCV absolute difference function to create a single threshold image. The threshold image is a black and white image with the object that moved between the original frames represented by white pixels.
   - Use a blur value to expand/contract the threshold image's white pixel area.
   - Add crosshair to mark object. Display x and y position values of crosshair point.
   - Touch-Ups: Optimized the sensitivity and blur size to track the object easily.



- Completed patent search. An additional blog post will be made for the findings of the search.

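A minimal end-to-end sketch of the steps listed above, assuming the OpenCV C++ API used in the Visual Studio setup. The variable names, the tuned values, and the crosshair drawing are illustrative, not the project's exact code.

```cpp
#include <opencv2/opencv.hpp>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    cv::VideoCapture cap(0);                 // external or built-in webcam
    if (!cap.isOpened()) return -1;

    const double SENSITIVITY = 20;           // threshold ("sensitivity") value, tuned by hand
    const int BLUR_SIZE = 10;                // blur kernel size, tuned by hand

    cv::Mat frame1, frame2, gray1, gray2, diff, thresh;
    for (;;) {
        cap >> frame1;                       // capture two consecutive images
        cap >> frame2;
        if (frame1.empty() || frame2.empty()) break;

        cv::cvtColor(frame1, gray1, cv::COLOR_BGR2GRAY);   // color -> gray scale
        cv::cvtColor(frame2, gray2, cv::COLOR_BGR2GRAY);

        cv::absdiff(gray1, gray2, diff);                   // what moved between the frames
        cv::threshold(diff, thresh, SENSITIVITY, 255, cv::THRESH_BINARY);
        cv::blur(thresh, thresh, cv::Size(BLUR_SIZE, BLUR_SIZE));          // expand the white area
        cv::threshold(thresh, thresh, SENSITIVITY, 255, cv::THRESH_BINARY);

        // Mark the largest white blob with a crosshair and display its x/y position.
        std::vector<std::vector<cv::Point> > contours;
        cv::findContours(thresh.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        double bestArea = 0;
        std::size_t best = 0;
        for (std::size_t i = 0; i < contours.size(); ++i) {
            double a = cv::contourArea(contours[i]);
            if (a > bestArea) { bestArea = a; best = i; }
        }
        if (bestArea > 0) {
            cv::Moments m = cv::moments(contours[best]);
            int x = (int)(m.m10 / m.m00), y = (int)(m.m01 / m.m00);
            cv::line(frame2, cv::Point(x - 15, y), cv::Point(x + 15, y), cv::Scalar(0, 255, 0), 2);
            cv::line(frame2, cv::Point(x, y - 15), cv::Point(x, y + 15), cv::Scalar(0, 255, 0), 2);
            char label[32];
            std::sprintf(label, "(%d, %d)", x, y);
            cv::putText(frame2, label, cv::Point(x + 20, y), cv::FONT_HERSHEY_SIMPLEX, 0.5,
                        cv::Scalar(0, 255, 0));
        }
        cv::imshow("frame difference tracking", frame2);
        if (cv::waitKey(30) == 27) break;    // Esc to quit
    }
    return 0;
}
```
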
Problems:
- For the object tracking code, problems arose with the object's bounding rectangle. The bounding rectangle could not be made to work, so a crosshair at the center of the object was used instead.

Plan:
- Post the patent search results.
- The next step for the object tracking is to learn how to find a moving object's velocity based on captured images.
- Revisit the compound eye based on the findings of the patent search.

Sunday, November 22, 2015

Weekly Progress Report #5

Weekly Progress Report #5
Adnan Khan and Noah Borel
11/22/15

Progress:
- Began work on code for object tracking based on sequential frame difference:
  -  Capture two images one after the other.
  -  Convert each frame from RGB (color image) to gray scale (black and white image) for the absolute difference OpenCV function.
  -  Find the absolute difference between the images: the purpose is to find what has changed between the frames. A threshold image is produced with only the moving object represented by white pixels.

What I still have to do:
   -  Use blur function to make the threshold image clearer and more defined. Second threshold image produced.
   -  Middle of the located object is used for tracking. A bounding rectangle is placed around the object's sides farthest from the middle point (a sketch of one approach appears below).
   -  Display text and x and y position of object in frame.
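
Since the bounding rectangle is a remaining step, here is a hedged sketch of one way it could be drawn from the threshold image using OpenCV's findContours and boundingRect; it is illustrative, not the project's code.

```cpp
#include <opencv2/opencv.hpp>
#include <cstddef>
#include <vector>

// Draw a bounding rectangle around the largest white region of a binary threshold image.
// 'display' is the color frame the rectangle gets drawn onto.
void drawBoundingBox(const cv::Mat &threshImage, cv::Mat &display) {
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(threshImage.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    double bestArea = 0;
    cv::Rect bestBox;
    for (std::size_t i = 0; i < contours.size(); ++i) {
        double area = cv::contourArea(contours[i]);
        if (area > bestArea) {
            bestArea = area;
            bestBox = cv::boundingRect(contours[i]);   // rectangle touching the object's farthest sides
        }
    }
    if (bestArea > 0)
        cv::rectangle(display, bestBox, cv::Scalar(0, 255, 0), 2);
}
```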


Problems:
-  Parts of the code are confusing and hard to understand. It takes time to identify and research each line of code that is not understood. However, the overall structure of the object tracking program is understood.

Plan:
-  Complete the object tracking with sequential frame difference program. This includes the steps under "What I still have to do" in the progress section above.

Sunday, November 15, 2015

Weekly Progress Report #4

Weekly Progress Report #4
Adnan Khan and Noah Borel
11/15/15


Progress:
-   Microsoft Visual Studio 2010 Professional was successfully downloaded to a school computer (Lenovo Computer #10).
-   OpenCV 2.4.9 was successfully downloaded to same school computer.
-   OpenCV libraries were successfully linked to a demo Visual Studio project.
-   First project created to track an object based on color (explained in more detail in Progress Report #1). Steps (sketched in code after this list):
    -   Original RGB image converted to HSV image.
    -   HSV image converted to binary image.
    -   Threshold values isolate targeted object to track based on HSV values.
    -   Erode and Dilate functions clarify the targeted object.
    -   OpenCV findContours function marks the targeted object and gives coordinate values.
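
A minimal sketch of the color-tracking steps above using the OpenCV C++ API; the HSV bounds are placeholder values that would normally come from threshold sliders, not the project's actual settings.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    // Placeholder HSV range; in practice these values come from trackbars tuned to the target color.
    cv::Scalar hsvMin(20, 100, 100), hsvMax(35, 255, 255);

    cv::Mat frame, hsv, mask;
    for (;;) {
        cap >> frame;
        if (frame.empty()) break;

        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);   // RGB image -> HSV image
        cv::inRange(hsv, hsvMin, hsvMax, mask);        // HSV thresholds -> binary image

        // Erode then dilate to remove speckle noise and fill in the target.
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
        cv::erode(mask, mask, kernel);
        cv::dilate(mask, mask, kernel);

        // findContours marks the target and yields its coordinates.
        std::vector<std::vector<cv::Point> > contours;
        cv::findContours(mask.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        if (!contours.empty()) {
            // For simplicity the first contour is used; a fuller version would pick the largest.
            cv::Rect box = cv::boundingRect(contours[0]);
            cv::circle(frame, cv::Point(box.x + box.width / 2, box.y + box.height / 2),
                       5, cv::Scalar(0, 0, 255), -1);
        }
        cv::imshow("color tracking", frame);
        if (cv::waitKey(30) == 27) break;
    }
    return 0;
}
```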


Problems:
-   Learning how to download and set up Microsoft Visual Studio took much more time than expected. Otherwise, everything else was completed smoothly.

Plan:
-   OpenCV: The color detection system was mainly just to make sure that OpenCV was running correctly and that the platform was set up for future work. The next step is to track an object without color by comparing two consecutive frames and searching for differences between the images.
-   Compound Eye: Revisit the concept of the compound eye. Begin to brainstorm on ways to be able to process the curved image or multiple images.

Monday, November 9, 2015

Weekly Progress Report #3

Weekly Progress Report #3
Adnan Khan and Noah Borel
11/8/15

Progress:
- Tried to download Microsoft Visual Studio on a school computer.

Problems:
- There was trouble installing Microsoft Visual Studio on the school computers. The versions tried were the most recent version and the 2010 version.
- The most recent version may not have worked because the computer has an old OS.

Plan:
- Retry downloading Microsoft Visual Studio on a school computer.
- Download Microsoft Visual Studio on a home PC. The home PC has a more up-to-date OS, so the most recent Visual Studio version may work on it.

Thursday, November 5, 2015

Amcrest ProHD

Amcrest
- A highly rated security manufacturer.

Amcrest ProHD
- 1080p video at 30fps
- Relatively wide viewing angle of 90 degrees.
- 12 built in infrared LEDs for night vision use up to 32 feet
- Setup using iOS or Android device using Amcrest View Lite app
- Can be panned or tilted remotely using phone or computer
- Video can be streamed directly to computer, phone, tablet or NVR
- Built in motion alert system
- WiFi connection required for most features to work
- Requires power supply (no battery)






Sunday, November 1, 2015

Weekly Progress Report #2

Weekly Progress Report #2
Adnan Khan and Noah Borel
11/1/15

Progress:
-  Research into OpenCV with Microsoft Visual Studio software or Python software.

OpenCV with Microsoft Visual Studio:
Installation and Creation Process:
-  https://www.youtube.com/watch?v=rXkrQeqZd8A
-  http://docs.opencv.org/2.4/doc/tutorials/introduction/windows_install/windows_install.html

-  https://www.youtube.com/watch?v=POpMQPM9YlY
-  https://www.youtube.com/watch?v=X6rPdRZzgjg
-  https://www.youtube.com/watch?v=bSeFrPrqZ2A

OpenCV with Python:
Installation and Creation Process:
-  https://www.youtube.com/playlist?list=PLEmljcs2yU0wHqeLlrytfuiqyNKTKIlOq
-  https://wiki.python.org/moin/BeginnersGuide
-  http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_objdetect/py_face_detection/py_face_detection.html#face-detection


Problem:
-  Python seems too unfamiliar to use. It would take time to understand it better.
-  Visual Studio, even though it has a more complicated user interface, has more instructional resources.

Plan:
-  Use a Windows laptop from school and begin hands-on Visual Studio work.
-  Download both Visual Studio and OpenCV, and have the OpenCV libraries accessible to the software.
-  Test object tracking using color method.

Sunday, October 25, 2015

Weekly Progress Report #1

Weekly Progress Report #1
Adnan Khan and Noah Borel
10/25/15


Progress:
1. Overall System Diagram and Plan Created:
 -   Both the Vision System team and the Interception System team have created a system diagram of the major aspects of the entire drone defense system. An order of events has been made according to the system diagram and tasks have been assigned. 
 -   We will be using a normal webcam for the beginning to just get familiar with how the drone detection will work. Then, after we can track with a webcam, we will look into the Amcrest cameras.


2. OpenCV:
 -   Research has been done into downloading OpenCV and connecting it with a development software.
Commonly used development software: Microsoft Visual Studio, Python
Links: 1. https://www.visualstudio.com/en-us/visual-studio-homepage-vs.aspx
           2. https://www.python.org

3. Drone Detection:
 -   Research has been done into what will be done once the development software is up and running and how the drone will be tracked.
 -   The RGB (red, green, blue) color image that is captured is converted into HSV (hue, saturation, value) to get more distinction in the resulting image.
 -   The HSV image is altered with thresholds to isolate the drone. If the drone is a distinct HSV color, thresholds can isolate it so that only the drone's HSV range is seen. Lastly, this image is converted to a binary (black and white) image in which the thresholds reveal only the drone. (A small sketch of deriving an HSV range from a sample color appears at the end of this Progress section.)
Links: 1.  https://www.youtube.com/watch?v=bSeFrPrqZ2A
           2.  http://computer-vision-tutorials.blogspot.com/2014/03/opencv-tutorials-color-detection-object.html

4. Compound Eye:
 -   Found book on Compound Eyes: "The Physiology of the Compound Eyes of Insects and Crustaceans" By Sigmund Exner. Even though the original book was written in 1891, it includes a deep study of the compound eye. The text has much more explanation than in a dissertation from a college student.
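
To make the thresholding idea in item 3 concrete, here is a small hedged sketch of how an HSV range could be derived from one sampled pixel of the drone's color; the tolerance values are arbitrary starting points, not values from the project.

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>

// Build a +/- tolerance HSV range around one sampled BGR color (e.g. a pixel taken from the drone).
// The tolerance values are arbitrary starting points meant to be tuned by hand.
void hsvRangeFromSample(const cv::Vec3b &bgrSample, cv::Scalar &lower, cv::Scalar &upper) {
    cv::Mat sample(1, 1, CV_8UC3, cv::Scalar(bgrSample[0], bgrSample[1], bgrSample[2]));
    cv::Mat hsv;
    cv::cvtColor(sample, hsv, cv::COLOR_BGR2HSV);
    cv::Vec3b p = hsv.at<cv::Vec3b>(0, 0);

    const int hTol = 10, sTol = 60, vTol = 60;
    lower = cv::Scalar(std::max(p[0] - hTol, 0),   std::max(p[1] - sTol, 0),   std::max(p[2] - vTol, 0));
    upper = cv::Scalar(std::min(p[0] + hTol, 179), std::min(p[1] + sTol, 255), std::min(p[2] + vTol, 255));
    // The lower/upper scalars can then be passed to cv::inRange to produce the binary image.
}
```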

Problems:
 -   OpenCV: There are many options for which development software could be used. Some can run on Mac OS X while others cannot. The options must be looked through carefully.

 -   Compound Eye: There is a lot of complicated math involved in the analysis of the compound eye. There are many variables and formulas that relate to the optics of the eye.

Plan:
 -   OpenCV: Survey the various development environments and decide which one would be best given the computers we will be using. Learn the process that must go into each one to link OpenCV.

 -   Drone Detection: Once the OpenCV step is complete, the coding for the drone detection can be implemented.

 -   Compound Eye: For any math or diagrams that seem too complicated, I will consult Mr. Lin to better understand them.


Tuesday, October 20, 2015

Camera Overviews

THETA 

Products: Ricoh Theta, Ricoh Theta m15, Ricoh Theta S

For all of them, downloadable software is needed to view the recorded videos or images on a Mac or PC.

Ricoh Theta:
- Takes only still images.
- Can take 360° images with one shot.
- Can be controlled wirelessly with a smartphone.
- Images can't be viewed live, need to be separately uploaded onto a PC, Mac or smartphone.
- Can't be used when charging.

Ricoh Theta m15:
- Can take videos as well as images.
- Videos and images need to be transferred to a PC, Mac or smartphone to be viewed.
- Can take 360° images and videos.
- Can't be used when charging.
- Can be controlled by a smartphone wirelessly.
- Can only shoot video for 5 minutes

Ricoh Theta S:
- Higher quality video and images (1080p videos at 30fps).
- Can take 360° images and videos.
- Can take live video at 10fps and broadcast it to smartphone.
- Can shoot videos for up to 25 minutes.
- Can't shoot videos while charging.
- Can be controlled by a smartphone.




Sunday, October 18, 2015

Ideas for Eye Structure

Ideas for Eye Structure
10/18/15
Adnan Khan


1. Single aperture camera mounted facing upward. Custom made glass attachments made to extend field of view and bend incoming light to the single lens. Each glass piece is angled precisely to have the image from different angles be mirrored into the lens. The received images would be in quadrants or in divided sections with each section found in a different square in the array.
2. Mounted omnidirectional camera. The camera revolves around and captures images of the full semi-sphere field of view piece by piece. The drone could have a unique color that would be detected. The camera would be in a certain order of image capturing, but when the unique color is detected the camera will stop.
3. A single video-recording camera. This camera could be mounted on a device that could move it in a path to have it record all in the semi-sphere field of view. The camera movement will not stop when the drone is detected, but the point on the camera's path where the drone is detected will be recorded.
4. Stationary Ricoh style camera with two opposite facing cameras. Camera could be rotated so that the wide field of view cameras are able to see where there would be blind spots.
5. Three angled single aperture cameras that face upward and outward. The three single aperture cameras act as three ommatidia with image capturing of only that one field of view. The camera would be stationary.

Wednesday, October 14, 2015

Object Detection Research - Noah Borel

  1. A Survey on Approaches of Object Detection (2013),  S. Shantaiya, K. Verma, and K. Mehta.

Overview:
- Object detection using video has progressed rapidly in recent years.
- In most cases the focus has been on human motion and behavior.

Introduction:
- Video detection is starting to be used in detection of pedestrians, monitoring car traffic and identifying strange behavior near ATMs.
- Video detection attracts huge interest because of its potential uses in security.

Object Detection Approaches:
- High quality but inexpensive video cameras as well as high-powered computers are used in object detection.
- The detection of moving objects in video is a key step in overall object detection.
Feature Based Object Detection:
- In this type of object detection the object is usually detected using shape, size or color.
Shape:
- Shape based object detection is very complex due to having to segment the object from the rest of the picture.
- It becomes increasingly harder when there are multiple objects in the picture with different shadings and light levels.
- A method for detecting athletes in a game is PCA-HOG. This involves transforming the athletes into Histograms of Oriented Gradients and then applying Principal Component Analysis to them.
- A limitation of this particular method is that it is difficult to detect multiple objects. 
- Another method uses variable resolution double-level contours. This involves using a low resolution image to detect the object's edges and get the outline of the object. Then the high resolution image is used in a process where the outline is reduced until the actual object outline is found.
- One other method for shape based detection is to automatically differentiate between the foreground and background of the image and to detect the object based on the assumption that it has different geometrical features than the background.
Color:
- Color detection requires low computational power relative to other methods.
- A method for object detection using color involves the analyzing of RGB color images. A downside to this is that detection fails if the object is too small.
- Another color based detection involves using color histograms. This follows the idea of extracting HOG features with pixel colors inside being analyzed. A drawback of this method is that it can't work if the background is a similar color to the object.
Template Based Detection
- This requires a template of the object being detected to be available.
- Detection is done by matching the features of the template to the image being observed.
- The two types of template detection are fixed and deformable template matching.
- Fixed template matching is very useful if the object's shape does not vary with the change in viewing angles.
- Deformable template matching is more suitable in most cases of detection because objects tend to not have the same shape from different angles of view.
Motion Based Detection
- This method relies on changes over pixel or block levels. 
- The first stationary frame is used as the reference background frame. The following frames, with the object in them, are then compared to the reference frame to detect the object. This method requires the object to be moving continuously and not to be in the image before detection begins.
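
A minimal sketch of the reference-frame idea described in the last bullet, using OpenCV's absdiff; the threshold value is illustrative.

```cpp
#include <opencv2/opencv.hpp>

// Compare each new frame against a fixed reference (background) frame captured while the
// scene was empty; pixels that changed enough show up white in the returned mask.
cv::Mat detectAgainstReference(const cv::Mat &referenceGray, const cv::Mat &frameGray) {
    const double THRESHOLD = 25.0;   // illustrative value
    cv::Mat diff, mask;
    cv::absdiff(referenceGray, frameGray, diff);
    cv::threshold(diff, mask, THRESHOLD, 255, cv::THRESH_BINARY);
    return mask;
}
```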

Saturday, September 26, 2015

RE: GanttProject Plan: 9/24/15 - 11/8/15

Feedback:

  1. The activities listed in your Gantt Chart mainly reflect an organized reading plan of the technical info on your Project Resource page rather than a series of tasks or steps to solve your problem. Though solving your problem involves lots of reading and studying, those activities should be organized around your problem-solving steps instead of being a list of independent activities. Imagine following the current Gantt Chart: at the end of this period, you might be able to run 3-4 image processing tasks (which may or may not relate to your project directly) using OpenCV functions by following the book/tutorials, on a language/platform that may or may not be your final language/platform. You will gain some general knowledge of computer vision and object tracking, and that is definitely helpful, but you will only know a little of everything, without any focus or depth. The main problem is that, through the whole plan, you have not even started solving your problem yet. Taking a STEM research course is totally different from taking a course such as Calculus. You learn calculus chapter by chapter (since your focus is "Calculus"), without really caring about any real-world problems. (Even though all the math textbooks emphasize/claim real-world applications, most of the examples are either fake or non-realistic.) Learning research should focus on the "research problem". Every learning activity should attach to a problem-solving step and lead to a measurable result. I hope you can start seeing the fundamental difference between them.
  2. With that said, you might notice that your Gantt Chart needs to include some important activities, for example: determining the compound eyes mechanism, designing the system architecture, developing an algorithm to detect the drone(s) in an ommatidium image, developing an algorithm to track the drone(s) in compound eyes, calculating the location and speed of the drone, etc. Under each main activity, there will be lots of focused reading, studying, designing, implementation, testing, documentation, etc.

Thursday, September 24, 2015

GanttProject Plan: 9/24/15 - 11/8/15

Below is a link to the google drive file containing the GanttProject with our plans for
9/24/15 to 11/8/15:

GanttProject Schedule 1


Wednesday, September 23, 2015

Presentation (9/17/15)

Access presentation via the link below:

Presentation on September 17th, 2015

A project example combining biology and robotics

Just for fun. It's a Google Science Fair 2014 winner, a 15 years old high school student. Combining the behaviors of a fruit fly with his robot and drone.

Tuesday, September 22, 2015

RE: Camera Ideas (9/20/15)

Good search, and it shows you various possible ways/aspects of your project. You should also update the Project Resource page with that information! There are a few concerns/questions before you move ahead and order the cameras.
  1. Have you explored the possible optical solutions for the compound eyes?
  2. Compound eyes images are supposed to be at much lower resolution. Can we get cheap low-resolution cameras connected to the same computer?
  3. What is the difference of FOV between omnidirectional (360 degree Spherical Panoramic camera, Ricoh) and "compound eyes" camera (emulated by Amcrest)?
  4. What are the algorithms people have used to detect and track moving object(s) using omnidirectional camera?
  5. How do you integrate the videos from regular camera and infrared camera?
  6. What is the overall system architecture (including hardware, software, and communication)? How will the cameras be connected together and the real-time video processed?
  7. Does the mobile phone have enough computational power to process the video in real-time?
  8. What will the final system look like? Can the work done on the current/temporary system be transferred smoothly to the final system?
  9. Before ordering any camera, a thorough survey and comparison on the similar types of cameras should be conducted. 
There are some links in the Project Resource page addressing some of the aforementioned problems. You might want to take a look and expand the broadness and depth of your research.

Sunday, September 20, 2015

Camera Ideas (9/20/15)

Camera Ideas
9/20/15
Adnan Khan


   There are no compound eye cameras available on the market. Because of this, the initial idea is to take several pictures that together form one complete image, or to take a 360° panoramic picture. This will be done with pictures from a single aperture camera.

On the market:


  • Cameras for Compound Eye Imitation: 

   -   Amcrest ProHD 1080P WiFi Security Monitoring System:
       -   Wireless pan/tilt/zoom ("Intelligent Digital Zoom").
       -   Mounted camera with 90° field of view.
       -   Dimensions: 5.0 x 4.0 x 4.0 inches, .5 lb.
       -   32ft Night vision range using 12 IR LEDs.
       -   Relatively cheap: $120.
Link:
http://www.amazon.com/gp/product/B0145OQXCK?gwSec=1&redirect=true&ref_=s9_simh_gw_p421_d0_i1

   -   Ricoh Theta M15 360° Spherical Panoramic Camera:
       -   360° spherical panorama picture capture.
       -   Two lenses: One on each side of the device.
       -   Relatively expensive: $300.
       -   Dimensions: 0.9 x 1.6 x 5 inches, .25 lb.
       -   Wireless connection to smartphone devices.
Link:
http://www.amazon.com/Ricoh-Theta-Degree-Spherical-Panorama/dp/B00OZCM70K/ref=sr_1_1?s=photo&ie=UTF8&qid=1442768015&sr=1-1&keywords=360+degree+camera


  • Cameras for Infrared/Thermal Detection:

   -   Seek Thermal Imaging Camera for Android/iOS Devices:
        -   Attachable to devices: Android USB and iOS Lightning ports.
        -   Small/compact: 1.6 x 0.8 x 0.6 inches, 1 lb.
        -   Relatively expensive: $250.
        -   Very Sensitive: Heat detection from -40°F to 626°F.
        -   36° field of view.
Link:
http://www.amazon.com/Seek-UW-AAA-Thermal-Imaging-Connector/dp/B00NYWAHHM/ref=sr_1_5?ie=UTF8&qid=1442768741&sr=8-5&keywords=infrared+camera


RE: Initial Planning & Coordination

Initial Project Planning/Coordination
Adnan Khan and Noah Borel

   Over the course of the next year, we will be working to create an anti-drone defense system. We, Team 2, will be working on the vision segment of the system while Team 3 will be working on the physical intercepting segment of the system. As we work on these parts of the project, the final product will be able to efficiently spot a flying drone and incapacitate it safely without damaging it or causing a hazard to any nearby people (or buildings).
   In the real world, as drones become increasingly used and put in the public eye, the demand for an anti-drone system is very high. With many cases against drone fliers over privacy and trespassing concerns, an anti-drone system that does not damage the drone will be ideal. Usable by the government, private companies, and even for private housing, this product has great potential.
  • The original goal does not require "not damaging the drone" (it's good to have, but not required), but it does require not damaging buildings or hurting people in the intercepting process. This goal has implications for the material and speed of the projectile.
   Beyond its value as a product, this anti-drone defense system will advance the field of STEM in a great way. For the final system to work, STEM boundaries will be surpassed, as the defense system will be able to track the drone's unpredictable movements and be able to take it down safely and efficiently.

Vision System: Team 2: Adnan Khan, Noah Borel.
Intercepting System: Team 3: Eduardo Guzman, Henry Hoare.

   Groups will stay in touch via email and through mobile phone. At first, the two groups will develop each system individually. Once the systems become developed, the two groups will need to converge to integrate the two projects.
  • Between team 3 and your team, you should define the hardware/software interface between your two systems clearly early on. Otherwise, it won't be possible to integrate two systems together at the end! It includes things such as 
    • Architecture: are both systems controlled by a single microcontroller, or do they each have their own microcontroller and communicate with each other through some communication channel (and what is the physical communication channel)?
    • What are the data (e.g., distance, 3D angles), clock, and control signals between the two systems?
    • What are the timing requirements (e.g., target location data needs to be updated every 1ms) for the communication?  

To create the vision system, we will need full understanding of several topics:
-   Physics: Optics and the mechanism of the eye. (especially the optics of compound eyes and how to construct them.)
-   Biology: Types of eyes found in nature and how they work.
-   OpenCV: Understanding how to operate the program.
  • You will need basic knowledge of image processing, pattern recognition, computer vision, and motion tracking in the compound-eye context. OpenCV probably cannot cover the compound-eye functions; you will need to develop algorithms to handle them.

   Currently, we have basic knowledge of these topics. With this basic understanding, we have various ideas of what type of "eyes" we should use for the vision of the defense system. Yet, we don't know what we are capable of until we have learned all that we can. Our learning process has just started and we plan to absorb a lot before we begin diving right in.
   As we expand our knowledge, we will know more materials that we will need. As of now, we need a Linux computer to run the OpenCV and later we will need a drone prop for our vision tests.
  • You need to focus on the compound eyes mechanism as soon as possible, since it may imply the need to buy camera(s) and optical lenses, and it determines the platform requirements to run the algorithms.
  • If you consider involving infrared vision, we also need to buy an infrared camera.
   For the near future, we will review the summer work that we have done. Then, we will devise a strategy to thoroughly assess the different methods of achieving the vision. Furthermore, with Mr. Lin, we shall discuss what is capable based on the materials and resources that are given. Finally, we will continue to research how to create the "eyes" and we will prepare our presentation for Thursday.
  • Your research should not be limited to "the materials and resources that are given" since we will purchase equipment/materials as needed (if affordable).

Sunday, September 13, 2015

Initial Project Planning/Coordination

Initial Project Planning/Coordination
Adnan Khan and Noah Borel


   Over the course of the next year, we will be working to create an anti-drone defense system. We, Team 2, will be working on the vision segment of the system while Team 3 will be working on the physical intercepting segment of the system. As we work on these parts of the project, the final product will be able to efficiently spot a flying drone and incapacitate it safely without damaging it or causing a hazard to any nearby people.
   In the real world, as drones become increasingly used and put in the public eye, the demand for an anti-drone system is very high. With many cases against drone fliers over privacy and trespassing concerns, an anti-drone system that does not damage the drone will be ideal. Usable by the government, private companies, and even for private housing, this product has great potential.
   Beyond its value as a product, this anti-drone defense system will advance the field of STEM in a great way. For the final system to work, STEM boundaries will be surpassed, as the defense system will be able to track the drone's unpredictable movements and be able to take it down safely and efficiently.


Vision System: Team 2: Adnan Khan, Noah Borel.
Intercepting System: Team 3: Eduardo Guzman, Henry Hoare.

   Groups will stay in touch via email and through mobile phone. At first, the two groups will develop each system individually. Once the systems become developed, the two groups will need to converge to integrate the two projects.


To create the vision system, we will need full understanding of several topics:
-   Physics: Optics and the mechanism of the eye.
-   Biology: Types of eyes found in nature and how they work.
-   OpenCV: Understanding how to operate the program.

   Currently, we have basic knowledge of these topics. With this basic understanding, we have various ideas of what type of "eyes" we should use for the vision of the defense system. Yet, we don't know what we are capable of until we have learned all that we can. Our learning process has just started and we plan to absorb a lot before we begin diving right in.
   As we expand our knowledge, we will know more materials that we will need. As of now, we need a Linux computer to run the OpenCV and later we will need a drone prop for our vision tests.


   For the near future, we will review the summer work that we have done. Then, we will devise a strategy to thoroughly assess the different methods of achieving the vision. Furthermore, with Mr. Lin, we shall discuss what is capable based on the materials and resources that are given. Finally, we will continue to research how to create the "eyes" and we will prepare our presentation for Thursday.

Thursday, July 23, 2015

Research Task 2 (07/27/15 - 08/09/15)

Great to see you make substantial progress! The second phase will focus on Computer Vision, in both theory and practice. You will continue working on the OpenCV part of the previous assignment as re-described in the following. OpenCV provides you the practical tool to "do" computer vision. I also found a series of easy-to-follow video tutorials on Computer Vision to build your theoretical background. The last part is researching and brainstorming possible ways to implement the compound eyes system for your research project.

OpenCV
  1. Learning OpenCV: Computer Vision with the OpenCV Library (2008),  Gary Bradski and Adrian Kaehler. [chap. 1-2, 30 pages]. Get the basic idea of what is OpenCV. 
  2. You will need to download and install Ubuntu 12.04 (LTS) on your computer, or use a Linux laptop provided by school. You can also choose other OS's as your platform for OpenCV just for learning purpose. However, we will want to develop our code on Linux later.
Computer Vision (CV)
  1. Computer Vision Video Lectures, Center for Research in Computer Vision (CRCV), UCF. [video 1-3, 5-11, 13] This series of videos covers many up-to-date topics in the CV research field and is presented in a clear manner. You will be introduced to many terminologies, concepts, and techniques in computer vision. Going through them carefully (even though you may not be able to understand every detail) will give you a foundation for reading technical papers/reports.
Implementation of Compound Eyes
    You are going to find answers for the question: "How can you build a cost-effective compound eyes vision system?" The following may be some clarifications and hints to start your journey. Be creative and find your own solutions.
  • "Cost-effective" means the total cost is < $500 (not including computer).
  • Your compound eyes system can be an approximation to the natural ones instead of trying to replicate them, i.e., your system is "inspired" by the natural compound eyes. 
  •  How can we acquire compound-eyes images/videos? Multiple cameras? Single camera with some lenses? Anything in our daily lives can give us similar optical effects? Can we rotate a single camera (or mirror/lenses) and take pictures in many directions (only good for static object?) Which software can we use?.......
* Whenever you come across good papers/websites/reports, don't forget to add them to the "Project Resource" page.

* Please take electronic notes while you are studying the materials, watching the videos, or browsing through the web.

* Make PowerPoint presentations based on your notes. We will start presentation at the beginning of the year.

Sunday, July 12, 2015

Research Task 1 - Adnan Khan


Research Task 1
Adnan Khan


Compound Eyes:
1. Microoptical Artificial Compound Eyes (2005); [chapters 1-2, 27 pages]:

Introduction:
-   Compound eyes combine small eye volumes with a large field of view (FOV), at the cost of comparatively low spatial resolution.
-   Used by various insects. Allows them to obtain information about the environment with low brain processing power.
-   High resolution not always needed. Better to have a very compact, robust and cheap vision system.
-   Limitations in modern optics reduce resolution and also sensitivity with miniaturization of typical imaging optics.
-   Artificial compound eyes allow a much wider choice of materials than naturally occurring compound eyes. But, in nature, large numbers of ommatidia can be achieved easily.
-   Ommatidia = the units/lenses that make up the compound eye.

-   Many applications for compound eyes; they can be made at a small scale to fit on many things, and are very tiny but very accurate.
Fundamentals:
-   Natural Vision: Two known animal eye types: Single aperture eyes and compound eyes. Compound eyes can be divided into apposition and superposition compound eyes. In nature, it is more efficient to capture the image in a matrix rather than one whole image. Lack of resolution is counterbalanced by a large field of view and additional functionality such as polarization sensitivity and fast movement detection.
-   Single Aperture Eye: Has high sensitivity and resolution. However, this eye type has a small field of view. Single aperture eyes must be moved around to sample the entire surroundings. Large brain/high processing power needed for comprehending all the visual information from a single aperture eye.
-   Apposition Compound Eye: Consists of an array of microlenses on a curved surface. Each microlens is associated with a small group of photo receptors in its focal plane (where the light is focused). This then forms one optical channel = ommatidium. Ommatidium = microlens. Pigments form opaque walls to prevent light that is focused by one microlens to be received by an adjacent channel’s receptor.
     -   Interommatidial Angle: ∆Φ is used to sample the visual surroundings of the insect. ∆Φ = D / R_EYE, where D is the diameter of the microlens and R_EYE is the distance from the ommatidia to the center of the eye. (A small worked example follows at the end of these notes.)
     -   Nyquist angular frequency: Period of the finest pattern that can be resolved is 2∆Φ.

     -   Angular sensitivity function: Angular distance Φ of an object from the ommatidium’s optical axis determines the amount of response of the corresponding ommatidium. This response is shown in the angular sensitivity function (ASF).
     -   Acceptance angle: φ defines the minimum distance of two point light sources that can be resolved by the optics. 
     -   Modulation transfer function (MTF): The size of the acceptance angle in relation to Φ determines the modulation for angular frequencies up to the eye’s Nyquist frequency vs. φ is a measure of the overlap or separation of the ommatidia’s FOVs.
     -   Eye parameter: P = Diameter of microlens (D) · ∆Φ. P allows the determination of the creature’s illumination habitat. Smaller P is found in creatures in environments with brilliant sunlight that have diffraction limited eyes while a larger P is found in creatures that are nocturnal or that live in environments that are darker that have high light gathering power. Functionality of the eye determined by capturing a sufficient amount of photons to distinguish a certain pattern through sensitivity.
     -   Sensitivity: Higher sensitivity enables the same overall number of captured photons for shorter image capturing times or lower ambient illumination. The sensitivity allows for compound eyes to have a similar f-number (F/#, ratio of the lens’ focal length to the diameter of the entrance pupil/location) to single aperture eyes, allowing a much wider field of view if even smaller. The Airy disk diameter meeting the receptor diameter is the best compromise between resolution and sensitivity.
     -   Scaling of natural compound eyes: The eye radius is proportional to the square of the required resolution for compound eyes, while the relationship is linear for single aperture eyes. Resolution increases with the number and size of the ommatidia; the eyes either become too large or must show a lower resolution. For humans, a one-meter-diameter compound eye would be needed for the same angular resolution.
     -   Special properties of compound eyes: Many invertebrates have areas of locally higher resolution or “acute zones” in their compound eyes; these zones are in the direction of highest interest. Due to the very short focal length, the ommatidia provide a huge focal depth. The image plane of the microlenses is located at the focal plane, independent of the object distance.  Therefore, the angular resolution is constant over many distances. The spatial resolution scales linearly with the object distance. Many channels allow for break down of image processed. Compound eyes are the optimal optical arrangements for large FOV vision. By pooling receptors of adjacent ommatidia, which have different amounts of offset from the corresponding optical axis (receptor pooling), an increased sensitivity is achieved. This improvement in sensitivity is done without the loss of resolution.
-   Superposition Compound Eye: Much more light sensitive than natural apposition compound eyes. Found in nocturnal insects and deep-water crustaceans. The light from multiple facets combines on the surface of the photo receptor layer to form a single erect image of the object.
-   Vision System of the Jumping Spider: They have single aperture eyes but have eight of them. The eyes comprise of two high-resolution eyes, two wide angle eyes, and four additional side eyes. Compound eyes not useable as they would provide too low resolution for identification.
-   State of the Art of Man-Made Vision Systems: Distributing the image capturing task to an array of microlenses instead of using a single objective, will provide a large number of functionalities for technical imaging applications. Miniaturization and simplification are primary demands.
     -   Single aperture eye – classical objective: The image size is given as the product of the pixel size and pixel number, and required magnification is defined by the focal length. Many limitations arise to create high resolution and magnification. This method exemplifies the limitation of system miniaturization of classical single channel objectives.
     -   Apposition compound eye – autonomous robots, optical sensors for defense and surveillance, flat bed scanner, integral imaging, compact cameras: Microlense Array (MLA) used with this eye type. The number of image points is in general equal to the number of ommatidia. Yet, a low number of ommatidia are used. MEMS (Micro-Electro-Mechanical-Systems) technology is used that has a scanning retina for a change of direction of view. However, high effort for mechanics put in, resulting in poor resolution.
     -   Superposition compound eye – Gabor superlens, copying machines, X-ray optics: Ultra wide FOV imaging system based on refractive superposition used to analyze by ray-tracing simulation. Applications in highly symmetric one-to-one imaging systems using graded index MLAs.
     -   Vision system of the jumping spider – autonomous robots, compact large-area projection, compact cameras and relay optics: Transfer of different parts of an overall FOV accomplished by isolating optical channels. Compact digital imaging system with segmented fields of view would be used with an array of segments of microlenses, which are decentered with respect to the channel origin.
 -   Scaling Laws of Imaging Systems: Miniaturized imaging systems for resolution or information capacity, sensitivity, and size are introduced.  
      -   Resolution and Space Bandwidth Product: First order parameters, diffraction limitation, spherical aberration, focal depths, and information capacity. For small-scale lens systems, the diffraction limit itself is the important criteria describing resolution. This limitation can be avoided by using MLAs and parallel information transfer instead of single microlenses.
     -   Sensitivity: Light gathering capability is very important to imaging systems. Various different factors affect imaging system changes with scaling. Point source, extended source, and natural field darkening. The size of the photo sensitive pixels must decrease in order to maintain resolution during scaling. By decreasing the F/# when miniaturizing an image system, resolution and flux can be maintained.
-   3x3 Matrices for Paraxial Representation of MLAs: The combination of different elements along the optical axis using the 2x2 matrix formalism is discussed. The 3x3 matrix formalism explicitly contains the misalignments: a third component, which is always unity, is added to the ray vector.
     -   Fundamentals: Eye based on illumination environment, required resolution, and time for image acquisition. Main advantages of compound eyes are the small required volume and the large FOV. High resolution not suited for compound eyes.
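
To make the formulas in these notes concrete, here is a small sketch with made-up example numbers that computes the interommatidial angle ∆Φ = D / R_EYE, the finest resolvable period 2∆Φ, and the eye parameter P = D·∆Φ.

```cpp
#include <cstdio>

int main() {
    // Illustrative values only (not taken from the book): a 25 um microlens on a 0.5 mm eye radius.
    double D = 25.0;                     // microlens (ommatidium) diameter, micrometers
    double R_eye = 500.0;                // eye radius, micrometers

    double dPhi = D / R_eye;             // interommatidial angle, radians
    double finestPeriod = 2.0 * dPhi;    // Nyquist limit: period of the finest resolvable pattern
    double P = D * dPhi;                 // eye parameter, micrometer-radians

    std::printf("interommatidial angle = %.4f rad\n", dPhi);
    std::printf("finest period         = %.4f rad\n", finestPeriod);
    std::printf("eye parameter         = %.3f um*rad\n", P);
    return 0;
}
```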


2. Bio-Inspired Optic Flow Sensors for Artificial Compound Eyes (2014); [chapters 1-2, 26 pages]:
Chapter 1:
-   Motivation:
     -   Artificial compound eyes: These are bio-inspired devices that mimic a flying insect's visual organs and signal pathways. Wide FOV used with multi-directional sensing. Implemented on robot platforms, especially MAVs. Examples include CurvACE with 180° horizontal and 60° vertical FOVs, and the bio-inspired digital camera which combines elastomeric compound optical elements with deformable arrays of photodetectors. Wide-field optic flow sensing is achieved with a pseudo-hemispherical configuration by mounting a number of 2D array optic sensors on a flexible module found in the 3D semi-hemispherical module platform.
     -   MAVs and microsystems sensor capability gap: MAVs have limited payload, power, and wingspan, limiting eye capabilities. Yet, small flying insects face similar structural constraints, so compound eyes can be carried over. An analog neuromorphic optic flow sensor, implemented by mimicking the motion-sensing scheme in an insect's compound eyes, is the only sensor to meet the payload and power constraints.
     -   Bio-inspired wide field integration (WFI) navigation methods: Flying insects’ visual pathway is enabled with neurons in the compound eyes that interpret information about self-flight status from the wide FOV optic flow. WFI platform mounted on robot extract cues on self-status in flight.
     -   Current state of the art optic flow sensing solutions for MAVs: Integrations of optic flow sensing systems for collision avoidance and altitude control are demonstrated on some MAVs.
     -   Pure digital optic flow sensing approach: This implements computation intensive signal processing algorithms on hardware. The hardware consists of an image sensor to extract video scenes and a microcontroller (MCU) or a field programmable gate array (FPGA) to compute the algorithms. Provides very accurate optic flows but is difficult to put on MAVs.
     -   Pure analog VLSI optic flow sensors: Includes neuromorphic circuits that mimic neuro-biological architectures in an insect’s visual pathways. This includes two purposes; to implement the full insect’s visual signal processing chain as similar as possible and to have pure analog optic flow sensors selectively choose the aspects inside the insect’s eyes to effectively implement on the sensor. Pros include that it meets the power and payload constraints. Cons include that it is sensitive to temperature and process variations.
     -   Analog/digital (A/D) mixed-mode optic flow sensor: Includes a time-stamp-based optic flow algorithm. Algorithm uses motion estimation concepts from an insect’s neuromorphic algorithm and then maps the concepts into circuits. Reduces digital calculation power and minimizes static power consumption.
-   Thesis Outline: Bio-inspired optic flow sensors are customized for the semi-hemispherical artificial compound eyes platform.
     -   Visual information processing in a flying insect’s eyes: Starts on the anatomy of the insect’s compound eye and its mechanism of vision-based flying control. Then describes the elementary motion detector model that is known to estimate optic flows in the compound eyes. Continues on visual cues utilized in MAVs and their autonomous navigation.
Chapter 2:
-   Visual Information processing inside a flying insect’s eyes: Ommatidia include a lens and a photoreceptor to independently sense light. Natural compound eyes in insects have very wide FOVs and are immobile with fixed-focus optics. The eyes infer distance information from estimating motions (or optic flows). Eyes have a much higher detectable frequency range than single aperture eyes in humans but with the cost of resolution.
     -   Anatomy of an insect's compound eyes: The compound eyes consist of three main optic lobes, which are three types of neuropils or ganglia (dense networks of interwoven nerve fibers, their branches, and synapses): the lamina, medulla, and lobula complex. Lamina: right beneath the photoreceptor layer and receives direct input from it. It provides gain control functionality that ensures a quick adaptation to variation in background light intensity. Medulla: includes very small cells. Sends the information from the lamina to the lobula complex. Lobula complex: the convergence area of the optic lobe. Information from the photoreceptors converges onto cells in the lobula plate; these cells are the tangential cells. Then the information is transferred to higher brain centers and to descending neurons to motor centers in the thoracic ganglia.
     -   Elementary motion detector model: Model based on the natural design of the specialized processing capabilities of visual motion in the medulla. The motion detection model is named the correlation elementary motion detector (EMD); it is based on correlating the signal associated with one ommatidium with the delayed signal from a neighboring ommatidium. The EMD uses the spatial connectivity, delay operation, and nonlinear interaction of the correlator to produce directional motion sensitivity and motion-related output without computing derivatives. (A generic sketch of such a correlator follows after the Chapter 2 notes.)
     -   Wide-field optic flow analysis: lobular complex: Image information cannot be directly retrieved at the local level and optic flow from various regions of the visual field must be combined to infer behaviorally significant information. Spatial integration takes place after the medulla in the lobular plate where tangential neurons receive input from large receptive fields.
     -   Bio-inspired autonomous navigation: Method extracts control parameters by integrating wide-field optic flow and by several matched filters; this is then tuned to be sensitive for optic flow patterns induced by a robot’s specific movement.
     -   Summary: In this chapter, the vision processing in an insect’s eye is described. Three optic lobes, which are lamina, medulla, and lobula, perform temporal high-pass filtering, motion detection, and wide-field optic flow integration in each lobe. The integrated optic flows by a matched filter provide self-motion, depth and speed information for a flying insect’s flight. A 3 degree-of-freedom (3 DOF) autonomous navigation control scheme, which utilizes the extracted motion by WFI, is also described.
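
A minimal sketch of the correlation-type elementary motion detector described above: the signal from one ommatidium is multiplied by a delayed copy of its neighbor's signal, and the two mirror-image products are subtracted to give a direction-sensitive output. This is a generic textbook form, not the thesis's implementation.

```cpp
#include <cstddef>
#include <vector>

// Correlation-type elementary motion detector over two neighboring photoreceptor signals
// sampled at discrete time steps. Each half-detector multiplies one signal by a delayed
// copy of its neighbor; subtracting the mirror-image half gives a direction-sensitive output.
std::vector<double> emdOutput(const std::vector<double> &left,
                              const std::vector<double> &right,
                              std::size_t delaySteps) {
    std::vector<double> out(left.size(), 0.0);
    for (std::size_t t = delaySteps; t < left.size() && t < right.size(); ++t) {
        double rightward = left[t - delaySteps] * right[t];   // motion from left toward right
        double leftward  = right[t - delaySteps] * left[t];   // motion from right toward left
        out[t] = rightward - leftward;                        // sign encodes the motion direction
    }
    return out;
}
```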

Object Detection Using Compound Eyes:
1. A new method for moving object detection using variable resolution bionic compound eyes (2011):
-   Introduction: An image filter with variable resolution and a proposal for using variable resolution double-level contours is discussed. First, the double-level includes the contour detection for lowest resolution images. Then, the texture gradient computation is done that carries out contour segmentation in the high-resolution image acquired by the single eye. Lastly, a Geodesic Active Contour model for texture boundary detection is applied so the moving target is recognized. New method was created to reduce the complexity of the 3-2-3-dimension single camera capture technique. Contour detection and variable resolution are important characteristics of the compound eye.
-   Imaging mechanism of ommatidium: Compound eyes consist of the large field system and small field system. The large field system detects global motion while the small field system detects moving targets accurately.
-   Compound eyes multi-resolution imaging: The facets of the compound eyes have variation in concentrations depending on location. The ommatidia with low resolution in sparse areas can detect changes in motion, while only requiring a small amount of data. Ommatidia with high resolution in dense areas can accurately identify objects. An elliptical projection of the ommatidium was used to obtain information within the oval region.  Gaussian-weighted center of gravity used to improve de-noising when gravity is used to extract the information from the elliptical image center.
-   Image filter with bionic compound eyes: Different distribution densities in the eyes allows for different types of filters to be used to process the images. Small Gaussian filters used for sensitive areas and larger Gaussian filters used in the denser regions. Noise always has an effect on the image. Filtration is thus done on the several compound eyes projection images with the Gaussian function to then extract information.
-   Description of the Detection Algorithm with Variable Resolution: Very few bionic compound eyes have been made. The method used here applies a moving-object detection algorithm based on variable resolution double-level contours (a rough two-level sketch follows at the end of these notes).
     -   Level 1: Contour Detection: First a low-resolution remotely sensed image is processed. Image edges (contours) are detected with Canny method. Then, minimum area matrix is calculated that would include the contour.
     -   Level 2: Contour Segmentation with GAC Texture Gradient Model: Next step includes the action of processing a high-resolution remotely sensed image with the same surface features. A GAC gradient flow model based on texture gradient is used.
-   Results: Profile of a ship in water extracted from low-resolution image. Red rectangle calculated to fit with minimum area on the profile. At first, low-resolution images only needed for profile detection. Then, algorithm processes high-resolution images. This allows the algorithm to compute the smaller area to accurately extract the target. This method provides improved computational efficiency and data handling.
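
A rough sketch inspired by (not reproducing) the two-level method summarized above: find a coarse contour and bounding box in a low-resolution copy of the image, then restrict further processing to the matching region of the full-resolution image. Parameter values are illustrative.

```cpp
#include <opencv2/opencv.hpp>
#include <cstddef>
#include <vector>

// Level 1: find the target's rough bounding box in a downscaled copy using Canny edges.
// Level 2 (not shown) would then run the finer segmentation only inside that box in the
// full-resolution image, which is what keeps the amount of data small.
cv::Rect coarseRegionOfInterest(const cv::Mat &fullGray, double scale = 0.25) {
    cv::Mat lowRes, edges;
    cv::resize(fullGray, lowRes, cv::Size(), scale, scale);
    cv::Canny(lowRes, edges, 50, 150);                        // illustrative edge thresholds

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    cv::Rect best;
    double bestArea = 0;
    for (std::size_t i = 0; i < contours.size(); ++i) {
        cv::Rect r = cv::boundingRect(contours[i]);
        if ((double)r.area() > bestArea) { bestArea = r.area(); best = r; }
    }
    // Map the low-resolution rectangle back to full-resolution coordinates.
    return cv::Rect((int)(best.x / scale), (int)(best.y / scale),
                    (int)(best.width / scale), (int)(best.height / scale));
}
```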

-----------------
-   The concepts of the compound eyes are relatively easy to understand, but the math and actual implementation after the concepts seems complicated. Nevertheless, I see that the use of compound eyes in our drone defense robot will be good as the eyes can be very efficient.
-   I have made progress with the compound eyes and the object motion detection. I have yet to complete the OpenCV projects. I will be working on these soon. 
-----------------