Super large backlog update

Okay, so a lot has happened over the last month!

Early this month I started to dive into how we are going to stitch our images together. I approached this through OpenCV's Python bindings so that I also have access to libraries like NumPy. At its core, the pipeline takes a directory of images as input, detects key points, extracts and matches feature descriptors, and then estimates a homography and warps the images to fit. The results are PRETTY good right now, but could be better. I've since spent quite a bit of time working with the code to get it to accept more images on the first run.

skalanes_stitch_1-copy

The Earlham College poster fair was also earlier this month, and I wrote up and designed the poster detailing some highlights of our research. A PDF of the poster can be found here –> uas-poster-2016.

I suggested that Vitalii and Niraj use OpenCV for object detection of archaeological sites. I've also started to look into the different ways things like animal activity and vegetation health might be assessed through computer vision. Interestingly enough, vegetation is not only easy for computer vision to detect, but it can also yield rough estimates of how nitrogen-heavy the plants are.
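
As a rough illustration of why vegetation is easy to pick out, here is a sketch using the Excess Green index on normalized RGB. The index choice and the threshold are my assumptions; an actual nitrogen estimate would need calibrated NIR data, not plain RGB:

```python
import numpy as np

def excess_green(rgb):
    """rgb: HxWx3 float array in [0, 1]; returns ExG = 2g - r - b per pixel."""
    total = rgb.sum(axis=2) + 1e-8              # guard against divide-by-zero
    r, g, b = (rgb[..., i] / total for i in range(3))
    return 2 * g - r - b

def vegetation_fraction(rgb, threshold=0.1):
    """Fraction of pixels whose Excess Green exceeds the (assumed) threshold."""
    return float((excess_green(rgb) > threshold).mean())
```

On an aerial frame, thresholding ExG gives a quick vegetation mask; lupine-extent measurement would then reduce to counting masked pixels per georeferenced tile.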

I've continued to work with UGCS's open software packages. I recently got custom map layers (i.e., GIS layers) to appear in the flight program. As the majority of the SDK is undocumented, the work feels a lot like reverse engineering and takes more time than I would like.

This weekend I had intended to fly a programmed flight over our campus to make a high-resolution map that Charlie had asked me for. I've since run into networking problems, though, and cannot proceed further until I get them figured out. I can receive downlink and telemetry data but cannot get an uplink to the UAV. I suspect this is mostly a networking error, but it's one that continues to confuse me.

Moving forward: first, my plan is to get the ground station working correctly again; this is my highest priority. Second, while Niraj and Vitalii handle the computer vision software, I will start familiarising myself a bit more with QGIS.

Script for batch conversion and algorithms to find rectilinear shapes

This week, Vitalii and I started off by writing a bash script that takes a set of images from a directory, converts them one at a time, and saves the converted images in a sub-directory (or any other directory). The converted files are saved as TIFF and keep the same file name as the originals. Because a single image takes more than 3 minutes to convert using ImageMagick, a batch of images can take quite a while. For now we will use the nohup command to run the conversion in the background, but we will work on parallelizing it, which should not be difficult considering that it is embarrassingly parallel.
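
The bash script itself isn't reproduced here, but since each conversion is independent, the parallel version could be as simple as a process pool driving ImageMagick's `convert`. The glob logic and worker count below are placeholders, not our actual script:

```python
import subprocess
from pathlib import Path
from multiprocessing import Pool

def convert_one(args):
    """Convert one image to TIFF with ImageMagick, keeping the original name."""
    src, out_dir = args
    dst = Path(out_dir) / (Path(src).stem + ".tiff")
    subprocess.run(["convert", str(src), str(dst)], check=True)
    return dst

def convert_all(in_dir, out_dir, workers=4):
    """Fan the independent conversions out over a small process pool."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    images = sorted(p for p in Path(in_dir).iterdir() if p.is_file())
    with Pool(workers) as pool:
        return pool.map(convert_one, [(p, out) for p in images])
```

With 4 workers, a 3-minutes-per-image batch should finish in roughly a quarter of the serial time, since the conversions share no state.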

For the rest of the week, we looked at algorithms which could be useful for detecting rectilinear shapes in aerial images. We found that most of the algorithms, for accuracy, take multiple aerial images of the same region, often shot from different angles, to determine any man-made objects on the ground. However, we also found a convincing research article which uses the Boldt algorithm to find rectilinear shapes in aerial imagery.

Up next, analyzing aerial images from Skalanes

Vitalii, Niraj and I talked about how to proceed with the aerial images from Skalanes. For now we are going to focus on two aspects: identifying possibly anthropogenic surface features and measuring the extent of the lupine. There are a couple of algorithms that look promising for feature ID, but they require stereo images; we will look into doing that next year. There is also at least one approach that uses mono images, which they will start with. The automated image conversion scripts they wrote will be used to tune the input characteristics of the images for the algorithm(s). We haven't decided on an approach yet for the lupine, but I did have an idea for doing it based on a color map seeded with human input. It's on the back of an envelope in the Hopper lab…

I guess this is why they call it research

Starting off, this week has been spent reading and studying up on the current tech that's useful for us. The pun in the title is fully intended, as this is maybe the 5th time I've had to look at all the new technology and advances made (damn you, Moore's law).
Looking into a more advanced LiDAR system had me sifting through nearly all the viable models that fit our budget. Sample rate, beam power, light shielding, and mass were a few of the deciding factors that went into comparing models.

  • A high sample rate allows the LiDAR to travel faster without suffering a loss of resolution.
  • The power output of the laser beam itself is important for achieving greater distance; our original system only had a usable distance of ~1.5m above the ground in broad daylight.
  • Light shielding, though it can be added later, is best built in as close to the laser output as possible. The shield allows through only the light recognisable to the laser receiver and blocks out as much UV light as possible.
  • Because the goal is to have the LiDAR mounted on the UAS, mass needs to be taken into consideration.

After I made up a list and handed it over to Charlie to look over, he submitted it to the scientific equipment fund.

I also took the time this week to put together a new abstract for our research. Since first starting on the research, our knowledge and capabilities have expanded drastically, and I felt our abstract should reflect this. As of today we have submitted to be a part of the Earlham College "Natural Sciences Division Undergraduate Research Poster Conference."

Back into the swing of things

Now a few weeks into the new semester, it's time to stop procrastinating and get back to work!

I've started off by doing what seems like a mass updating of hardware, software, firmware, etc. The DJI drone system and ground control software both had to be brought up to date. Initially there were a lot of errors in getting them talking to one another again, but as of Wednesday this week they are fully happy and working properly. I've also taken the chance to update the plug-ins and various parts of the website, and I've made accounts on the blog for Vitalii and Niraj, who will be joining us by working on some of the back-end coding.

In an effort to streamline our massive photo cache, I've copied all photos onto our media server here on campus so we can situate things. After talking with Charlie and Erin, we have decided to sort initially by day and then by capture device. This means, though, that we have to map all photos to their correct day, which has proved to be more difficult than I thought… If the EXIF data on an image doesn't contain a date, the date is just set to the most recent "creation date," which is when I copied them. That's left Erin and me with a whole lot of images that still need to be sorted and sifted by hand. I'm also still finding places where images are stored, such as Google Drive, and have been working on getting all of those to the server as well. It's sort of like a big game of cat and mouse and is tedious at best.
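
For the images that do carry EXIF data, the day-mapping can be automated. Something like this Pillow sketch (tag 36867 is DateTimeOriginal; the "unsorted" fallback folder name is my own choice) would route the undated ones into a pile for hand-sorting:

```python
import shutil
from pathlib import Path
from PIL import Image

EXIF_DATETIME_ORIGINAL = 36867   # stored as 'YYYY:MM:DD HH:MM:SS'

def capture_day(path):
    """Return 'YYYY-MM-DD' from EXIF, or None when the tag is absent."""
    try:
        exif = Image.open(path)._getexif() or {}
        stamp = exif[EXIF_DATETIME_ORIGINAL]
        return stamp.split(" ")[0].replace(":", "-")
    except Exception:
        return None

def sort_by_day(src_dir, dst_dir):
    """Copy each photo into a per-day folder; undated photos go to 'unsorted'."""
    for p in sorted(Path(src_dir).glob("*.jpg")):
        day = capture_day(p) or "unsorted"
        out = Path(dst_dir) / day
        out.mkdir(parents=True, exist_ok=True)
        shutil.copy2(p, out / p.name)
```

The filesystem timestamp is deliberately never consulted, since on copied files it only records the copy date.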

With Niraj and Vitalii joining us, I have also been doing my best to catch them up to speed on all the projects and the details therein. Much of this involves working with the UAV images and how to post-process them: trying things out on proprietary software and then figuring out how best to do it in large batches with open-source programs. I've most recently been trying to figure out if it's possible to get a 3D topographic image without needing to apply a LiDAR point-cloud layer. While not as accurate, it would certainly be quick and would be used more as an approximation.

As time becomes available, I've also started going through all the video footage we have saved and begun to throw together a very (VERY) rough storyboard for the documentary. I'm hoping to have a solid chunk of time this weekend to work on this a bit more. Really I'm just focused on getting some generic shots worked together so that we can use them as filler for voice-over and context.

As things move along I’ll keep posting! Cheers!

Field soil platform

I spent some time yesterday resurrecting the soil humidity & temperature and high-temperature water platform that Ivan, Kristin, and I originally made (and re-made) in 2013 and 2014. I was able to find all the parts, re-assemble it, and dig up an Arduino sketch in the SparkleShare folder, and voilà, it worked. The wiring and sketch are documented here.

IMAG0446

Back to list management…

Data update

This past week, I worked on getting some database views ready that show us the results for each sampling day. This way, each evening in Iceland we will be able to quickly and effectively QA the data and check that what we have matches what we should have. It is now set up so that we have views of the last 24 hours of data for both the streaming and readings tables in the database. The streaming view shows the recordtime, latitude, longitude, elevation, sensortype, and sensor value; the readings view displays the time, site, sector, spot, sensortype, and value for the last 24 hours.
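
In SQLite terms (our production database and exact schema may differ; the table and column names below just follow the list in this post), the streaming view might look like this, with the readings view built analogously:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE streaming (
    recordtime TEXT, latitude REAL, longitude REAL,
    elevation REAL, sensortype TEXT, value REAL
);
-- QA view: only rows recorded in the last 24 hours
CREATE VIEW streaming_last24 AS
    SELECT recordtime, latitude, longitude, elevation, sensortype, value
    FROM streaming
    WHERE recordtime >= datetime('now', '-1 day');
""")

# A reading from 2 hours ago shows up; one from 3 days ago does not.
conn.execute("INSERT INTO streaming VALUES "
             "(datetime('now','-2 hours'), 65.1, -14.0, 12.0, 'temp', 4.2)")
conn.execute("INSERT INTO streaming VALUES "
             "(datetime('now','-3 days'), 65.1, -14.0, 12.0, 'temp', 3.9)")
rows = conn.execute("SELECT sensortype, value FROM streaming_last24").fetchall()
```

Because the view re-evaluates `datetime('now', '-1 day')` on every query, the evening QA check always reflects a rolling 24-hour window with no maintenance.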

I also finished up the ER diagram for the data model as it now stands in our database with a couple of changes from the last iteration.

I began learning enough about Flask this week to start adapting the viz tool to work with the current data model, and expect to make more headway on that this week. I also started writing up documentation for all our tables and why we chose to structure them the way we did, along with explanations of the relationships between the tables and how we chose those.

Building Device Housing and Bench Tools

Each device I am prototyping for bench or field work needs some kind of housing so that it can serve its purpose and/or stay operational in wet or dirty situations. There are three I am focusing on now:

  1. Bench-top falcon tube spinner. Originally inspired by this device that costs $100+, the bench spinner is necessary for accurately measuring the pH of soil. Charlie and I have come up with a device that can spin up to 10 falcon tubes at a time and will be made from PVC pipe, epoxy, and potentially a Lego motor.
    Screen Shot 2016-03-23 at 10.49.19 AM
  2. Field sensor case. Modeled off of the 3D-printed 'flask' design previously used in Iceland. The case will be larger to house a BLE shield and will have a hole with some kind of cover to protect the IR temp sensor (working with BLE). Right now I am trying to figure out whether the soil moisture/temp probe (not working right now, may be toast) and the conductivity probe (working) really need to be out in the field or whether the same data can be gathered on the bench. Once everything is working and in some kind of preliminary housing I can test that.
    IMG_1558
  3. NIR TI Nano housing. We have just purchased two quartz crystal coverslips that transmit NIR radiation. I want to build one of these into the housing for the Nano so that the internal optics as well as the microcontroller are protected. The coverslip is about the size and thickness of a quarter. It is important that it sits completely flush with the Nano optics and that it can be lowered completely onto the sample, so that the distance between the optics and the sample is not only minimized but uniform. Until the quartz slips arrive I have stalled testing and calibration of the Nano to protect it.

Screen Shot 2016-03-28 at 10.12.43 AM

Screen Shot 2016-05-27 at 10.13.00 AM

Two other things that are temporarily on the back burner but are still important:

  1. The CO2 probe. If it works, this probe would allow us to measure CO2 respiration as a proxy for microbial biomass. Charlie had the excellent idea of attaching the probe to one end of a long PVC pipe full of soil, to increase our sample size without disturbing the microbes by digging up and mixing their soil habitat.
  2. The RGB Munsell color sensor. I have started experimenting with the colors the RGB sensor reads and have so far been a little disappointed by its accuracy. I will work on compensating for ambient light and optimizing the readings in one color space before using a linear transformation to convert those values to Munsell space.
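
The linear-transformation step could be fit from calibration swatches with least squares. A sketch, where the swatch values are made up and a real Munsell conversion would need a proper calibration target:

```python
import numpy as np

def fit_color_transform(rgb_samples, target_samples):
    """Fit the 3x3 matrix M minimizing ||rgb @ M - target|| over the swatches."""
    M, *_ = np.linalg.lstsq(np.asarray(rgb_samples, float),
                            np.asarray(target_samples, float), rcond=None)
    return M

def apply_transform(M, rgb):
    """Map a sensor RGB reading into the target color space."""
    return np.asarray(rgb, float) @ M
```

Ambient-light compensation would slot in before the fit, by subtracting a dark reading from each swatch measurement, so the matrix only has to model the sensor response.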

Two final ideas are on the lowest priority in the ‘if I have time before we leave’ category. They are:

  1. Bench-top light-transmission organic content. A fancy name for shining light through a tube and measuring the thickness of an organic-content band. This would be cool because it's automated, but it is easy to do visually and might even be more accurate, so it's not a high priority.
  2. RGB pH strip measurements. This one is also easy to do by sight; it isn't difficult to match the color of a pH strip to a key. However, once the Munsell color platform works, adding this functionality is simply a matter of writing the software, because the hardware setup is the same. For that reason it is more likely to be completed than the OC reader, which is somewhat design-intensive.

Lab Notebook on Field Day

This week, I’ve done a lot of work on Field Day. It doesn’t seem like much, but that’s because I kept getting caught up in tiny errors while coding. The biggest thing I’ve added to Field Day is part of the Lab Notebook activity. We decided this week that integrating Google Drive is less important than being able to open local files, because we will not always have internet access in Iceland, and that’s essential for Drive if we want to update our files. The Google Drive API for Android also has limited options: it only allows an application to access files that were created by the application itself. Since most of our files are PDFs and the Android API doesn’t allow uploading, we decided to move away from it for now. There’s a Java client library for Google Drive with many more capabilities that we will probably use when we go back to Google Drive.

I’ve begun working on the local documents viewer and have successfully downloaded files and displayed the PDFs. There’s now an option to download files where the user provides the URL of the server and the path to the directory containing the files. The code relies on the directory having indexing enabled. Field Day queries the directory and returns a list of files and folders, and the user is asked to select which files to download. If a folder is selected, Field Day queries that folder, and so on, until the user hits ‘Download.’ The files are downloaded and stored on external storage (the SD card), keeping the directory structure that Field Day saw in the remote directory. A list of files is then displayed on the screen, and clicking a file displays it. Currently this works only for PDFs and for JPEGs, which are handled as Bitmaps; text files are not yet supported. If a folder is clicked, the folder is opened and the files and/or folders inside it are displayed.
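
The real code is Android/Java, but the recursive walk over an indexing-enabled directory boils down to something like this language-agnostic sketch. The regex assumes Apache-style autoindex pages, and the URLs are placeholders:

```python
import re
import urllib.request
from pathlib import Path

# Capture hrefs, skipping sort-order links ("?C=N...") and absolute/parent paths
LINK_RE = re.compile(r'href="([^"?/][^"]*)"')

def list_index(url):
    """Return (files, folders) named on an autoindex page."""
    html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    names = LINK_RE.findall(html)
    folders = [n for n in names if n.endswith("/")]
    files = [n for n in names if not n.endswith("/")]
    return files, folders

def download_tree(url, dest):
    """Mirror the remote directory structure under dest (url must end in '/')."""
    Path(dest).mkdir(parents=True, exist_ok=True)
    files, folders = list_index(url)
    for f in files:
        urllib.request.urlretrieve(url + f, str(Path(dest) / f))
    for d in folders:
        download_tree(url + d, Path(dest) / d)    # recurse into subfolders
```

The recursion mirrors what Field Day does interactively: each selected folder triggers another index query until only files remain.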

NIR Reflectance

This past week I mostly worked on NIR, the field platform, and a little on the CO2 sensor.

I converted the absorbance data collected from the TI Nano to reflectance, which is better documented for organic content. I reran standards that are a mix of sandy soil, which has very little organic carbon, and compost, which is very high in organic carbon. These are the results:

Screen Shot 2016-05-23 at 10.53.41 AM
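
For reference, the conversion itself is just the inverse of the usual absorbance relation, A = -log10(R):

```python
import numpy as np

def absorbance_to_reflectance(a):
    """Invert A = -log10(R): R = 10^(-A), elementwise over a spectrum."""
    return 10.0 ** (-np.asarray(a, float))
```

Applied across the Nano's wavelength axis, this turns each absorbance spectrum into the reflectance spectra plotted above.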

For reference, this is a figure from Shuo Li et al. [1]

Screen Shot 2016-05-17 at 3.57.26 PM

I would really like it if the nice peak between 1300-1400 nm were our organics peak, but I suspect we actually want to focus on the scraggly region out by 1700 nm. My plan is to focus in on this region and increase the sampling rate of the device. The trend here is very promising.

The field platform prototype is coming along. I have incorporated the BLE and Field Day-friendly formatting, but I am getting readings that don’t make sense. This week I will focus more on hardware debugging.

The RGB platform is my next order of business. I am going to start with Munsell color. That will give me a good jumping off point for less crucial things like pH and NPK strip analysis.

[1] OrganicCarbonTibet
