Monday, July 31, 2017

Day 18

  I pretty much finished my code today.
  In the morning, I met Dr. Pelz and he suggested that I write up a brief explanation of my code.
  In the afternoon, we went outside to record some videos of our eyes while wearing the eye tracker. After that, I read up on basic machine learning.
  It sure was a busy day!

Sunday, July 30, 2017

Outline
Slide 1: (title) Cosmic Ray Damaged Image Repair (picture 1) Damaged (picture 2) After repair
Slide 2: (title) Summary of the presentation
What are cosmic rays and how do they affect images?
Challenges we are facing
Brief history of previous work
(*) The algorithm for removing the hot pixels
(*) How the algorithm works.
Possible applications
Slide3:  (title) How do Cosmic Rays Damage Images?
Images taken on Earth are mostly unaffected by cosmic rays because the atmosphere blocks most of them. In space, however, many kinds of cosmic rays exist. We can't see cosmic rays directly, but their energy is high enough to damage a space camera's sensor, which can leave hot pixels in the resulting images.
Image of hot pixels.
Slide 4: (title) Challenges
1. Distinction between hot pixels and stars (image).
2. Removing hot pixels from solar panels. (image)
Slide 5: (title) Previous work
Kevin Moser from RIT did incredible work on this project. (with an image)
Limitations (maybe)
Slide 6: (title) Algorithm -- overview
….
Slide7: (title)algorithm sub-step1
Slide8: (title)algorithm sub-step2
Slide9: (title)algorithm sub-step3
Not sure how many slides will be used for sub-steps.
Slide 11:
(title) Comparison of the image before and after repair.
Two images
Some statistics: a chart of true positives, true negatives, false positives, and false negatives
Slide 12:
(title)Applications.

Make the images look better. It can also help clean up the noise so we can see the stars more clearly.
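For the statistics slide, the true/false positive counts come from comparing a ground-truth mask of hot pixels with the mask the algorithm flags. A minimal sketch with made-up example masks (not real data):

```python
import numpy as np

# Hypothetical masks: True where a pixel is flagged as hot.
truth = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [1, 0, 0]], dtype=bool)   # ground-truth hot pixels
pred  = np.array([[0, 1, 0],
                  [0, 0, 0],
                  [1, 1, 0]], dtype=bool)   # pixels the algorithm flagged

tp = np.sum( truth &  pred)  # hot pixels we found          -> 2
fn = np.sum( truth & ~pred)  # hot pixels we missed         -> 1
fp = np.sum(~truth &  pred)  # good pixels wrongly flagged  -> 1
tn = np.sum(~truth & ~pred)  # good pixels left alone       -> 5

precision = tp / (tp + fp)
recall    = tp / (tp + fn)
print(tp, fn, fp, tn)
```

The same four counts feed directly into the before/after comparison chart on the slide.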

Day 17

  Dr. Pelz is not here today, but he gave me many suggestions yesterday.
  I tried his suggestion today and it worked pretty well. Although I'm not sure if it will work on all kinds of images, I decided to keep it since it works on the images I'm working with. I haven't started my outline yet, but I don't think it will take too much effort.
  In the afternoon at around 2pm, I felt sleepy again -- maybe because it's kind of warm in our room. I went for a walk and drank some water, and that helped significantly.
  I think I'm almost done with the project, but I still have more things to do, and I'm still looking for a more convenient way to achieve it.


Friday, July 28, 2017

Day16

  The most exciting day so far!
  In the morning, I wrote more code for my algorithm and read some papers about machine learning. After lunch, I went to the group meeting and "drove" a car on a simulator. It was the best game I've ever played in my life: real car, "real" traffic, real sound.
  After that, Dr. Pelz shared with me his way of removing hot pixels. Although it's more convenient and easier to code, his program ran into the same problem as mine -- neither is able to effectively remove the hot pixels on the solar panels. Nevertheless, Dr. Pelz gave me some really valuable suggestions that should improve the algorithm a lot.
  The observatory at night was overall satisfying. Although we were not able to actually observe any stars because of the cloudy sky, we had a wonderful meal and we learned a lot about both the history of the observatory and how an eclipse happens.
 

Wednesday, July 26, 2017

Day15

   I'm halfway through the internship.
   In the morning, we shared the projects we are doing. It seems that most of the other interns have accomplished a lot on their projects. I tested different thresholds in the algorithm, chose a suitable one, and sent it to Dr. Pelz to see if it works. I also helped Jeremy with his Pupil Labs calibration.
   In the afternoon, I was trying to increase my algorithm's accuracy. I tested a new method, but I haven't finished coding it yet. I also helped Jeremy test both the Pupil Labs calibration and the Positive Science calibration.
   I forgot to bring my lunch today. Instead, I had pizza for lunch. How lucky!

Tuesday, July 25, 2017

Day14

  The first thing today was finding a way to read h5 files. After searching online, I found some libraries that support reading HDF5 files. However, they only seem to be friendly to Visual C++. While I was deciding whether I should move to Visual C++, I found a piece of software published by the HDF Group that can read HDF5 files. After actually trying it, I found it more powerful than it looked: it can export HDF5 to text files. Although using C++ to read text files is not that fast, it's now the best way for me to at least read a raw image.
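I read the exported text files in C++, but the same idea can be sketched quickly in Python with numpy, assuming the export tool writes pixel values as plain whitespace-separated text, one image row per line (an assumption about its format; the 3x3 values below are made up):

```python
import io
import numpy as np

# Pretend this string is the text file exported from the HDF5 tool:
# whitespace-separated pixel values, one image row per line.
exported = io.StringIO("100 102 4095\n 98 101  99\n103 100 102\n")

raw = np.loadtxt(exported, dtype=np.uint16)
print(raw.shape)   # (3, 3)
print(raw.max())   # 4095 -- a suspiciously bright pixel
```

In practice the file would be opened by path instead of from a string, but the parsing step is the same.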

  After lunch, I started to write code for it. After about half an hour of coding, my code gave me the result I wanted, which was beyond my expectations. Then I wandered around a bit and read more tutorials. Then Dr. Pelz came in and we looked at the resulting image. Dr. Pelz seemed excited, and he immediately asked me to apply a Bayer filter to it. After doing some research, I applied the Bayer filter to the images. I'm still a little afraid that the program won't be able to preserve all the original features while removing most of the noise.
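The raw sensor data is a Bayer mosaic, so one basic step is separating the color sites before any filtering. A minimal Python sketch, assuming an RGGB layout (the actual camera's pattern may differ) and a made-up 4x4 frame:

```python
import numpy as np

# A tiny fake 4x4 raw mosaic; real frames come from the camera.
raw = np.arange(16, dtype=np.uint16).reshape(4, 4)

# Assuming an RGGB Bayer layout (top-left pixel is red):
R  = raw[0::2, 0::2]   # red sites
G1 = raw[0::2, 1::2]   # green sites on red rows
G2 = raw[1::2, 0::2]   # green sites on blue rows
B  = raw[1::2, 1::2]   # blue sites

print(R)   # [[ 0  2] [ 8 10]]
print(B)   # [[ 5  7] [13 15]]
```

Working per channel this way avoids mixing colors when hunting for hot pixels, since a hot site only stands out against neighbors of the same color.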

   I will try changing some of the constants tomorrow and see how the program works out.

Day 13

  This morning, Dr. Pelz met with us, and he patiently and carefully helped me make the algorithm clearer so I can start implementing it on raw images.
  I don't know why C++ makes it so hard to read and write raw images. I spent so much time trying to figure out how to read a raw image. In the end, Dr. Pelz exported the raw image as an HDF5 file for me to read, just when I was about to learn Python. What I'm going to do is modify my previous code, which can only run on tiff images, so it can run directly on NEF images. I'm going to have a lot of code to write -- I can't wait to see the result!

Friday, July 21, 2017

Day 12

  It was a short morning session, and we didn't talk about anything particularly important.
  In the morning, I combined Dr. Pelz's idea with the method I had used before on jpg files and checked how well it worked on the dead pixels. Since there are too many variables across different images, and it seems the old method can only be used to preserve the useful pixels, I decided to update the method I used before on jpg files.
  In the afternoon, I tried to think up a new way of detecting, and I'm still evaluating the feasibility of the method. Hopefully, I will see the result on Monday!

Thursday, July 20, 2017

Day11

  We successfully finished reading all of our abstracts today!
  As usual, I met Dr. Pelz in the morning and we talked more about the CRDIR project. He had some work for me to do.
  In order to have a good plan for what to do next after this project, I went to both the REU meeting and the group meeting. Both included incredible presentations. I'm going to finish my tutorial tomorrow, hopefully, and start my next project next week.

Day10

  In the morning session, we read more people's abstracts and gave suggestions to them.
  I met Dr. Pelz after that. He told me that he found a function in the rawpy library that can actually detect dead pixels with just two lines of code. It seems that neither Kevin nor I did enough reading before starting our projects. I started to read some "literature" to see what others had done on this problem.
  It seems like my project is coming to an end. But I still want to finish it with a method that does not require multiple pictures. I started to debug the code I'd written and added more constraints to it. Now it works pretty well, and the method can be used to clear noise and defective pixels from both raw and jpg images.

Tuesday, July 18, 2017

Day9

  In the morning, we shared several abstracts and I got valuable suggestions from both peers and Joe.
  I met Dr. Pelz in the morning and he had a really cool idea related to the project I'm doing. After absorbing his idea and algorithm, I started to write code for it. The code itself worked well. However, I used a wrong way to convert NEF files to tiff files, which took me a lot of time to debug. Hopefully, this will work tomorrow.
  After figuring out the error in the NEF conversion, I started to think about our former method, which doesn't require more than one frame. I think I can figure both ways out.

Monday, July 17, 2017

Day 8

  In the morning session, I shared my project and abstract with the group.
  After that, I proofread Kevin's paper again. Although he used kernels well, I still had some confusion about which kinds of defective pixels his algorithm can clean.
  After lunch, Ronny joined my lab. We had several long, nice conversations with Dr. Pelz about both the effective ways to read raw images and the algorithm Ronny and I designed last week. Dr. Pelz had many helpful suggestions, and afterwards our algorithms both ran faster and found targets more accurately.
  Our next step is actually implementing our algorithms on raw images. We are also trying to update our algorithm to recognize the defective pixels more accurately.

Sunday, July 16, 2017

Abstract

Particles that bombard the Earth from beyond its atmosphere are known as cosmic rays. Although we can't see or feel cosmic rays, they do exist outside the atmosphere, and they cause trouble for people who work in space, one example being damage to pictures taken by digital cameras. If you zoom in on pictures taken by NASA, you may find defective pixels damaged by cosmic rays. Those defective pixels not only reduce the quality of the pictures, but also interfere with the analysis of the stars. Cosmic Ray Damaged Image Repair requires coming up with an effective way to clean up as many defective pixels as possible while keeping as many effective pixels as possible unchanged. We will briefly analyze and summarize previous methods and present a new way of detecting and cleaning defective pixels, along with proofs and evaluations.

Day 7

  I learned a lot today. In the morning, we had a small lab meeting. First, Iyus made a short presentation about his Iris detection lab. Dr. Pelz gave him some valuable suggestions about the priority of detections. After that, Dr. Pelz showed us his previous version of eye trackers and the way they work. They record the gradient vectors of each pixel and see them as unique. Dr. Pelz also shared his next possible version of eye tracker with us.

  In the afternoon, we worked individually. I started working on my Cosmic Ray Damaged Image Repair project and made some progress. Dr. Pelz also sent me Kevin Moser's email, because he is a former student who made progress on this project.

  I started to write the abstract for my paper and I hope I can make more progress on my project!

Thursday, July 13, 2017

Day 6

  Today was an important day for me because I finally decided what I'm going to do this summer.

  I met Dr. Pelz in the morning and he helped me a lot with project selection. NASA, several years ago, provided us with an opportunity to work on Cosmic Ray Damaged Image Repair. Kevin, a former student at RIT, came up with a nice algorithm that was able to detect most of the damage. However, his program was not perfect. Recently, people at NASA found that some of the damage was not so easily detected by Kevin's algorithm.

  Because of the limited resources available for this project, I started with Kevin's paper first. He used kernels with a hand-picked threshold for detection. His MATLAB code was also readable. After reading his code, I had a basic picture of the topic I'm working on. Kevin was successful without tons of cascade classifiers. My main goal is to improve his program or to come up with a better way of detection.
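Kevin's MATLAB code isn't reproduced here, but the kernel-plus-threshold idea can be sketched in Python: flag any pixel that exceeds the median of its 3x3 neighborhood by more than a fixed threshold (the threshold value below is made up for illustration):

```python
import numpy as np
from scipy.ndimage import median_filter

def detect_hot_pixels(img, threshold=50):
    """Flag pixels far brighter than their 3x3 neighborhood median."""
    med = median_filter(img.astype(float), size=3)
    return (img.astype(float) - med) > threshold

# Small synthetic test frame with one hot pixel at (2, 2).
img = np.full((5, 5), 100.0)
img[2, 2] = 4000.0

mask = detect_hot_pixels(img)
print(np.argwhere(mask))   # [[2 2]]
```

A single bright spike is caught because the median of its neighborhood ignores the outlier; a real star usually spreads over several pixels, which is exactly the distinction the challenges slide is about.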

   

Wednesday, July 12, 2017

Day 5

  Nice day!
  Funny video in the morning with sound!😃
  We were supposed to have a boot camp this morning. However, it didn't happen until the end of the day.
  I was trying to find a way to detect the direction a person is looking without using an infrared LED light. After looking at Iyus' Apple for a while, I didn't find anything wrong without an infrared LED light. When I asked Iyus for the reason, he showed me some frames from the videos we took over the past days. It did work for pupil detection on light-colored irises (like Jeremy's). However, when he showed me the pictures of his and my dark eyes, we couldn't tell which parts belong to the pupils. It seems that it's impossible for us to detect pupils without using LEDs, so I started to work on programming.
  Today's learning was kind of meaningful to me. I learned more techniques for image processing and other good stuff.
  It wasn't until the end of the day that I noticed I still hadn't got a good project to work on. Maybe I'll go see Dr. Pelz tomorrow and see if he has any gift for me!
  Day ends.

Tuesday, July 11, 2017

Day4

   Another busy day.
   I started looking at the code the author of one of the papers wrote for his eye-tracking technique. It was 4000 lines of code!!! However, the author did really well, writing clearly with comments. He even used function names consistent with the ones in his paper. After browsing the code briefly, I found that he even wrote the functions in the order he described them in his paper. How nice of him! Then I started reading the code alongside the paper. The main reason I started reading his code was that he did not write much in his paper -- maybe he wanted us to read the code together with the paper.
  I spent the whole morning reading his code and trying to understand it. For lunch, I used the microwave for the first time to heat up my pizza.
  In the afternoon, I started writing my own code with a tutorial. Everyone was sleepy in the afternoon, so we went to the cafeteria and had some hot chocolate. I spent a lot of time figuring out my code in the afternoon, and I didn't find the mistake until the end of the day.
  Day ends.

Monday, July 10, 2017

Day3

  New week new start!

  After two days of working, I have learned briefly about eye-tracking techniques and the algorithms we can use to track eyes. To get to the next level, I also want to gain more practical experience.
  I started with the installation of OpenCV on Eclipse Neon 3. After searching for installation tutorials (wow) online, I got tons of tutorials, including videos and PDFs. It seemed easy to install at first glance at the procedures. However, it was actually time-consuming. The building steps, especially, took me nearly an hour.
  As usual, I took another video with Iyus and Jeremy. I also flew the drone with Jo. It was not so exciting today, not only because it's Monday, but also because of the time-consuming installation. Nevertheless, I read papers while I was waiting and learned a lot from my conversation with Dr. Pelz.

  By the way, I'm looking forward to the presentations on Wednesday!😉😊👍

Friday, July 7, 2017

Day2

  It was surely more difficult to get up this morning. With the traffic cooperating, we arrived at RIT on time. After some brief morning announcements, we set off for our labs.

  Jo and Dr. Pelz were in the lab earlier than I was. They were discussing the new project we got today: it seems that the government has taken an interest in money printing recently, and they want us to track people's eyes while they are examining fake cash.

  It did take us a lot of effort throughout the day to set up two cameras in order to get a satisfying video, which is supposed to capture the whole process of manual money checking. It was kind of fun to just watch Jeremy checking the money. Since the batteries of the cameras died quickly, we had to stop in between to wait for them to recharge. The eye tracker worked well and it was pretty accurate.

  In the afternoon, I finished learning the way of tracking pupils by reading a tutorial with the help of Jeremy. After that, I went to make several new videos for Iyus's Iris project. Although we were really busy today, we did learn a lot and we're looking forward to the coming week!



Thursday, July 6, 2017

First day of work!

    Today was the first day of the RIT Imaging Science Internship. Everyone was excited about the work they're going to do in the next few months. The schedule had all of us meet in the reading room at 8:45 am, then do some team-building activities, after which was lunch. In the afternoon, we went to our own groups and met our partners and professors.
    To tell the truth, it was so hard to get up so early in the morning after being out of school. But out of excitement, I arrived at the Carlson Building 10 minutes early. After a brief introduction and a rundown of the rules, we gathered at a red barn which is much older than us.
    Tom, the coach who directed our team-building activities, was a nice guy who thought up really interesting games so we wouldn't get bored. You could also tell he was smart from the scenarios he imagined for the games and the ways he got us to actually learn something from them comfortably.
    We had pizza for lunch. I have to say, RIT people are so generous that we got twice as much food as we needed.
    Our real learning and internships began after lunch. I met my professor, Dr. Pelz, who later introduced me to his students, who are also working on visual perception (eye tracking) this summer. I interacted a lot with the students during the next three hours.
    I first went with Jeremy and Iyus to take several videos of the eye movements of people with different eye colors. Iyus and I have brown/black irises and Jeremy has blue ones. It sure was a sunny day outside the Carlson building, and the light was too bright for us to even keep our eyes open.
    Out of curiosity about what the lab is really about, I "interviewed" all three of the students in my lab during the rest of the time. They were really nice and friendly, and I learned a lot from them.
    I "interviewed" Iyus first -- as he sat next to me -- about the project he was doing. It was then that I realized that the videos we took were actually for him. He's doing something really complicated involving the detection of pupils. He is trying to write a program that can accurately find the position of the pupil so he can use it to find the movement of the eye. The main challenge is that sometimes the irises cannot be fully seen by video cameras. Things like eyelids may block the view of them.
   The next person I "interviewed" was Jeremy. He is currently doing a very important part of the process -- interpreting and examining the code from an online open-source project. It is important because it directly decides whether the device we use to perceive the position our eyes are looking at will work accurately. If not, we may need to design a better one.
   The last person I "interviewed" was Jo. She was talkative but focused. She was doing something I found fun: finding the movement of the eyes when a person feels scared or surprised. For this, she actually bought a drone online with a controller so we can try it before practicing on actual video games. She faced a serious problem today. In order to connect two cameras without using wifi or bluetooth, she has to find a proper connector that works. Unfortunately, the original connector did not work today. She is still brainstorming a way to connect those two cameras so they can share the same internal structure. (I'm not sure what that's called.)
   After learning more about the lab I'm in and the things people around me are doing, I have a better plan for this summer. I'm going to start learning the things they've already learned in college, and hopefully I can assist them with the projects they are doing!

Day 29

  In the morning, we had our practice presentation. To my surprise, my timing was actually pretty good.   Right after I presented in...