Saturday, September 26, 2015

RE: GanttProject Plan: 9/24/15 - 11/8/15

Feedback:

  1. The activities listed in your Gantt Chart mainly reflect an organized reading plan of the technical information on your Project Resource page rather than a series of tasks or steps to solve your problem. Though solving your problem involves a lot of reading and studying, those activities should be organized around your problem-solving steps instead of being a list of independent activities. Imagine following the current Gantt Chart: at the end of this period, you might be able to run 3-4 image processing tasks (which may or may not relate to your project directly) using OpenCV functions by following the book/tutorials, on a language/platform that may or may not be your final language/platform. You will gain some general knowledge of computer vision and object tracking, which is definitely helpful, but you will only know a little of everything, without any focus or depth. The main problem is that, through the whole plan, you have not even started solving your problem yet. Taking a STEM research course is totally different from taking a course such as Calculus. You learn calculus chapter by chapter (since your focus is "Calculus"), without really caring about any real-world problems. (Even though all the math textbooks emphasize/claim real-world applications, most of the examples are either fake or unrealistic.) Learning research should focus on the "research problem": every learning activity should attach to a problem-solving step and lead to a measurable result. I hope you can start seeing the fundamental difference between them.
  2. With that said, you might notice that your Gantt Chart needs to include some important activities, for example: determining the compound-eye mechanism, designing the system architecture, developing an algorithm to detect the drone(s) in an ommatidium image, developing an algorithm to track the drone(s) across the compound eye, and calculating the location and speed of the drone. Under each main activity there will be lots of focused reading, studying, designing, implementation, testing, documentation, etc.

Thursday, September 24, 2015

GanttProject Plan: 9/24/15 - 11/8/15

Below is a link to the Google Drive file containing the GanttProject with our plans for
9/24/15 to 11/8/15:

GanttProject Schedule 1


Wednesday, September 23, 2015

Presentation (9/17/15)

Access the presentation via the link below:

Presentation on September 17th, 2015

A project example combining biology and robotics

Just for fun. It's a Google Science Fair 2014 winner, a 15-year-old high school student who combined the behaviors of a fruit fly with his robot and drone.

Tuesday, September 22, 2015

RE: Camera Ideas (9/20/15)

Good search, and it shows various possible directions/aspects of your project. You should also update the Project Resource page with that information! There are a few concerns/questions to address before you move ahead and order the cameras.
  1. Have you explored the possible optical solutions for the compound eyes?
  2. Compound-eye images are supposed to be at much lower resolution. Can we get cheap low-resolution cameras connected to the same computer?
  3. What is the difference in FOV between the omnidirectional camera (the Ricoh 360° spherical panoramic camera) and the "compound eyes" camera (emulated by the Amcrest)?
  4. What algorithms have people used to detect and track moving object(s) with an omnidirectional camera? (A minimal detection sketch is included at the end of this post.)
  5. How do you integrate the video from the regular camera and the infrared camera?
  6. What is the overall system architecture (including hardware, software, and communication)? How will the cameras be connected, and how will the real-time video be processed?
  7. Does the mobile phone have enough computational power to process the video in real time?
  8. What will the final system look like? Can the work done on the current/temporary system be transferred smoothly to the final system?
  9. Before ordering any camera, a thorough survey and comparison of similar types of cameras should be conducted.
There are some links in the Project Resource page addressing some of the aforementioned problems. You might want to take a look and expand the breadth and depth of your research.
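
For question 4, here is a minimal sketch of one common approach (background subtraction) as a starting point. It assumes a Python/OpenCV 3.x setup; the video file name and threshold values are placeholders, and an omnidirectional or compound-eye image would need extra handling (e.g., unwrapping) before a step like this.

```python
# Minimal moving-object detection sketch using OpenCV background subtraction.
# Assumes Python with OpenCV 3.x; the video file name and thresholds are
# placeholders, not project decisions.
import cv2

cap = cv2.VideoCapture("drone_test.mp4")                    # placeholder source
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                          # moving pixels
    mask = cv2.medianBlur(mask, 5)                          # suppress noise
    # findContours returns different tuples across OpenCV versions; [-2] is
    # the contour list in both 3.x and 4.x.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    for c in contours:
        if cv2.contourArea(c) > 100:                        # ignore tiny blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("moving objects", frame)
    if cv2.waitKey(30) == 27:                               # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```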

Sunday, September 20, 2015

Camera Ideas (9/20/15)

Camera Ideas
9/20/15
Adnan Khan


   There are no compound eye cameras available on the market. Because of this, the initial idea is to take several pictures and combine them into one complete image, or to take a single 360° panoramic picture. This would be done with pictures from a single-aperture camera.
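
As a rough illustration of this stitching idea (assuming we end up using OpenCV, which is not yet decided), the high-level Stitcher API can combine several overlapping pictures. The file names below are placeholders.

```python
# Rough sketch of combining several overlapping single-aperture pictures into
# one panorama with OpenCV's high-level stitcher. File names are placeholders.
import cv2

filenames = ["view_left.jpg", "view_center.jpg", "view_right.jpg"]
images = [cv2.imread(f) for f in filenames]

# cv2.createStitcher() in OpenCV 3.x; cv2.Stitcher_create() in 4.x.
stitcher = cv2.createStitcher()
status, panorama = stitcher.stitch(images)

if status == 0:                       # 0 == Stitcher::OK
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("Stitching failed, status code:", status)
```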

On the market:


  • Cameras for Compound Eye Imitation: 

   -   Amcrest ProHD 1080P WiFi Security Monitoring System:
       -   Wireless pan/tilt/zoom ("Intelligent Digital Zoom").
       -   Mounted camera with 90° field of view.
       -   Dimensions: 5.0 x 4.0 x 4.0 inches, .5 lb.
       -   32ft Night vision range using 12 IR LEDs.
       -   Relatively cheap: $120.
Link:
http://www.amazon.com/gp/product/B0145OQXCK?gwSec=1&redirect=true&ref_=s9_simh_gw_p421_d0_i1

   -   Ricoh Theta M15 360° Spherical Panoramic Camera:
       -   360° spherical panorama picture capture.
       -   Two lenses: One on each side of the device.
       -   Relatively expensive: $300.
       -   Dimensions: 0.9 x 1.6 x 5 inches, .25 lb.
       -   Wireless connection to smartphone devices.
Link
http://www.amazon.com/Ricoh-Theta-Degree-Spherical-Panorama/dp/B00OZCM70K/ref=sr_1_1?s=photo&ie=UTF8&qid=1442768015&sr=1-1&keywords=360+degree+camera


  • Cameras for Infrared/Thermal Detection:

   -   Seek Thermal Imaging Camera for Android/iOS Devices:
        -   Attachable to devices: Android USB and iOS Lightning ports.
        -   Small/compact: 1.6 x 0.8 x 0.6 inches, 1 lb.
        -   Relatively expensive: $250.
        -   Very Sensitive: Heat detection from -40°F to 626°F.
        -   36° field of view.
Link:
http://www.amazon.com/Seek-UW-AAA-Thermal-Imaging-Connector/dp/B00NYWAHHM/ref=sr_1_5?ie=UTF8&qid=1442768741&sr=8-5&keywords=infrared+camera


RE: Initial Planning & Coordination

Initial Project Planning/Coordination
Adnan Khan and Noah Borel

   Over the course of the next year, we will be working to create an anti-drone defense system. We, Team 2, will be working on the vision segment of the system, while Team 3 will be working on the physical intercepting segment. Once both parts are complete, the final product will be able to efficiently spot a flying drone and incapacitate it safely without damaging it or causing a hazard to any nearby people (or buildings).
   In the real world, as drones become more widely used and more visible to the public, demand for an anti-drone system is high. With many cases brought against drone fliers over privacy and trespassing concerns, an anti-drone system that does not damage the drone would be ideal. Usable by the government, private companies, and even private households, this product has great potential.
  • The original goal does not require "not damaging the drone" (it is good to have, but not required); what it does require is not damaging buildings or hurting people in the intercepting process. This goal has implications for the material and speed of the projectile.
   Beyond its value as a product, this anti-drone defense system will advance the field of STEM: for the final system to work, it must be able to track the drone's unpredictable movements and take it down safely and efficiently.

Vision System: Team 2: Adnan Khan, Noah Borel.
Intercepting System: Team 3: Eduardo Guzman, Henry Hoare.

   Groups will stay in touch via email and mobile phone. At first, the two groups will develop each system individually. Once the systems are developed, the two groups will converge to integrate the two projects.
  • Between Team 3 and your team, you should clearly define the hardware/software interface between your two systems early on. Otherwise, it won't be possible to integrate the two systems at the end! This includes things such as (see the sketch after this list):
    • Architecture: are both systems controlled by a single microcontroller, or does each have its own microcontroller, with the two communicating over some channel (and what is the physical communication channel)?
    • What data (e.g., distance, 3D angles), clock, and control signals pass between the two systems?
    • What are the timing requirements for the communication (e.g., target location data needs to be updated every 1 ms)?
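
As one possible starting point for that interface discussion, here is a sketch of a fixed-size target packet the vision system could send. Every field name, unit, and layout choice here is a placeholder to be agreed on between the two teams, not a settled design.

```python
# Sketch of a possible vision -> interceptor data packet. All fields, units,
# and the binary layout are placeholders for the two teams to agree on.
import struct
import time

# Little-endian: timestamp (float64), distance in meters (float32),
# azimuth and elevation in degrees (float32 each), validity flag (uint8).
PACKET_FORMAT = "<dfffB"

def pack_target(distance_m, azimuth_deg, elevation_deg, valid=True):
    """Build one packet for the agreed channel (serial, UDP, etc.)."""
    return struct.pack(PACKET_FORMAT, time.time(),
                       distance_m, azimuth_deg, elevation_deg, int(valid))

def unpack_target(packet):
    """Decode a packet on the intercepting side."""
    t, dist, az, el, valid = struct.unpack(PACKET_FORMAT, packet)
    return {"time": t, "distance_m": dist,
            "azimuth_deg": az, "elevation_deg": el, "valid": bool(valid)}

# Example: send pack_target(12.5, 41.0, 17.5) at the agreed update rate and
# call unpack_target() on each packet received by the intercepting system.
```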

To create the vision system, we will need full understanding of several topics:
-   Physics: Optics and the mechanism of the eye (especially the optics of compound eyes and how to construct them).
-   Biology: Types of eyes found in nature and how they work.
-   OpenCV: Understanding how to operate the library.
  • You will need basic knowledge of image processing, pattern recognition, computer vision, and motion tracking in the compound-eye context. OpenCV probably cannot cover the compound-eye function on its own; you will need to develop algorithms to handle it. (A rough sketch of one way to emulate a compound-eye view follows this list.)
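
As a concrete illustration of that last point, here is a very rough sketch of one way to emulate a compound-eye view in software: divide each frame into a grid of coarse, low-resolution tiles ("ommatidia"). The grid and patch sizes below are arbitrary placeholders; the real mapping depends on the optics chosen.

```python
# Very rough sketch of emulating a compound-eye view in software: split a
# frame into a grid of tiles ("ommatidia") and reduce each tile to a coarse
# low-resolution patch. Grid and patch sizes here are arbitrary placeholders.
import cv2
import numpy as np

def compound_eye_view(frame, grid=(16, 16), patch=(8, 8)):
    h, w = frame.shape[:2]
    rows, cols = grid
    tile_rows = []
    for r in range(rows):
        row_patches = []
        for c in range(cols):
            tile = frame[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols]
            # Each "ommatidium" sees only a coarse version of its tile.
            row_patches.append(cv2.resize(tile, patch,
                                          interpolation=cv2.INTER_AREA))
        tile_rows.append(np.hstack(row_patches))
    return np.vstack(tile_rows)

# Usage idea: run detection/tracking on compound_eye_view(frame) instead of
# the full-resolution frame, to see what the problem looks like at
# ommatidium-level resolution.
```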

   Currently, we have basic knowledge of these topics. With this basic understanding, we have various ideas of what type of "eyes" we should use for the vision of the defense system. Yet, we don't know what we are capable of until we have learned all that we can. Our learning process has just started and we plan to absorb a lot before we begin diving right in.
   As we expand our knowledge, we will learn what additional materials we will need. As of now, we need a Linux computer to run OpenCV, and later we will need a drone prop for our vision tests.
  • You need to focus on the compound-eye mechanism as soon as possible, since it may determine which camera(s) and optical lenses need to be bought and what platform is required to run the algorithms.
  • If you consider involving infrared vision, we also need to buy an infrared camera.
   For the near future, we will review the summer work that we have done. Then, we will devise a strategy to thoroughly assess the different methods of achieving the vision. Furthermore, with Mr. Lin, we shall discuss what is feasible based on the materials and resources available. Finally, we will continue to research how to create the "eyes," and we will prepare our presentation for Thursday.
  • Your research should not be limited to "the materials and resources that are given," since we will purchase equipment/materials as needed (if affordable).

Sunday, September 13, 2015

Initial Project Planning/Coordination

Initial Project Planning/Coordination
Adnan Khan and Noah Borel


   Over the course of the next year, we will be working to create an anti-drone defense system. We, Team 2, will be working on the vision segment of the system, while Team 3 will be working on the physical intercepting segment. Once both parts are complete, the final product will be able to efficiently spot a flying drone and incapacitate it safely without damaging it or causing a hazard to any nearby people.
   In the real world, as drones become more widely used and more visible to the public, demand for an anti-drone system is high. With many cases brought against drone fliers over privacy and trespassing concerns, an anti-drone system that does not damage the drone would be ideal. Usable by the government, private companies, and even private households, this product has great potential.
   Beyond its value as a product, this anti-drone defense system will advance the field of STEM: for the final system to work, it must be able to track the drone's unpredictable movements and take it down safely and efficiently.


Vision System: Team 2: Adnan Khan, Noah Borel.
Intercepting System: Team 3: Eduardo Guzman, Henry Hoare.

   Groups will stay in touch via email and mobile phone. At first, the two groups will develop each system individually. Once the systems are developed, the two groups will converge to integrate the two projects.


To create the vision system, we will need full understanding of several topics:
-   Physics: Optics and the mechanism of the eye.
-   Biology: Types of eyes found in nature and how they work.
-   OpenCV: Understanding how to operate the library.

   Currently, we have basic knowledge of these topics. With this basic understanding, we have various ideas of what type of "eyes" we should use for the vision of the defense system. Yet, we don't know what we are capable of until we have learned all that we can. Our learning process has just started and we plan to absorb a lot before we begin diving right in.
   As we expand our knowledge, we will learn what additional materials we will need. As of now, we need a Linux computer to run OpenCV, and later we will need a drone prop for our vision tests.


   For the near future, we will review the summer work that we have done. Then, we will devise a strategy to thoroughly assess the different methods of achieving the vision. Furthermore, with Mr. Lin, we shall discuss what is feasible based on the materials and resources available. Finally, we will continue to research how to create the "eyes," and we will prepare our presentation for Thursday.