Taking the LiDAR out for a spin

We took the LiDAR out for a spin on Friday. We had a beautiful (read: high-tech) rig that consisted of Neil driving, Kellan holding the LiDAR out the sunroof, and Nic spread out across the back seat with the drone (Kia), laptop, and router, collecting data from the LiDAR. We re-learned that Kia doesn’t like having a lot of metal around, so our plan to collect GPS data was challenged; more on why GPS data is important below.

This is us trying to get a good GPS signal on Kia, not us actually driving with Kia sitting on the roof…
Note: Kia is the drone, not a Kia car that we are in.

After a lot of indoor practice with the LiDAR, we determined that the best way to set up Kia when in Iceland is to have a balanced rig with the weight evenly distributed to the sides. The elements we have to consider are the LiDAR, a WiFi transmitter, and battery packs for both. Neil and Charlie are working on building these into a reliable prototype.


Our ultimate goal is to create point clouds from the LiDAR data based on GPS coordinates. During our test run on Friday we collected data but are still mapping it based on time instead of GPS. Our plan for Monday is to pull the GPS data from our Friday test run from Kia and align the GPS/telemetry data with the LiDAR data in a point cloud. This will allow us to map the data correctly. We can’t map the LiDAR data using time alone because the LiDAR spins faster than our time readings can resolve, and because time is linear while our flight path most likely won’t be. If we were to create the point cloud using time, we would get a straight line with the data stretched out and repeated along it.
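To make the idea concrete, here is a minimal sketch of the kind of GPS-based mapping we’re after. This is not our actual code: the field layout, the flat local frame, and the assumption that the drone’s attitude can be ignored are all simplifications for illustration.

```python
# Minimal sketch: georeference LiDAR returns using GPS/telemetry fixes.
# Assumes a flat local east/north frame and ignores the drone's roll/pitch/yaw.
import numpy as np

def latlon_to_local_m(lat, lon, lat0, lon0):
    """Rough flat-earth conversion of lat/lon (degrees) to metres east/north of a reference point."""
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = 111_320.0 * np.cos(np.radians(lat0))
    return (lon - lon0) * m_per_deg_lon, (lat - lat0) * m_per_deg_lat

def build_point_cloud(lidar_returns, gps_fixes):
    """
    lidar_returns: array of (t, azimuth_deg, range_m), one row per return
    gps_fixes:     array of (t, lat, lon, alt_m) telemetry samples, sorted by time
    Returns an (N, 3) array of x, y, z points in a local frame.
    """
    lidar_returns = np.asarray(lidar_returns, dtype=float)
    gps_fixes = np.asarray(gps_fixes, dtype=float)
    lat0, lon0 = gps_fixes[0, 1], gps_fixes[0, 2]

    # Interpolate the platform position at each LiDAR timestamp.
    t = lidar_returns[:, 0]
    lat = np.interp(t, gps_fixes[:, 0], gps_fixes[:, 1])
    lon = np.interp(t, gps_fixes[:, 0], gps_fixes[:, 2])
    alt = np.interp(t, gps_fixes[:, 0], gps_fixes[:, 3])
    east, north = latlon_to_local_m(lat, lon, lat0, lon0)

    # Convert each polar return (azimuth, range) into an offset from the platform position.
    az = np.radians(lidar_returns[:, 1])
    rng = lidar_returns[:, 2]
    x = east + rng * np.sin(az)
    y = north + rng * np.cos(az)
    z = alt  # 2D scanner: all returns assumed to lie in a horizontal scan plane
    return np.column_stack([x, y, z])
```

Without the GPS interpolation step, the only axis available is time, which is exactly why a time-only mapping smears the scan out along a straight line.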

The next generation of next generation sequencers has arrived!

We have received, from Oxford Nanopore, the MinION. (Min-ION, not one-eyed yellow minions, get it straight.) This device promises whole-genome sequencing of microbes, viruses and archaea from soil samples we will collect at the Solheimajokull glacier. The device is extremely small and portable, and one sequencing run can be done on a laptop in 6-8 hours. One flow cell can run up to 12 samples at a time, and while the samples are being sequenced, internet access will allow simultaneous queries of a database to determine what species were in our sample. Thus, in REAL TIME, we can identify the composition of the soil microbiome. This is a drastic improvement over prior methods, which involved sending 16S rRNA samples to a company and waiting 2-3 weeks for results.

What kind of science are we doing with this device?

We will be collecting soil samples at the Solheimajokull glacier (near the town of Vik in Iceland). We want to obtain samples that have been “out of the freezer” i.e. not covered by the glacier for a specific number of years – 10, 20, 50, 75, 100, 150, 200, etc. We will compare the microbial populations present in these different soil samples and look at the type of soil and flora growing in these different locations. This will teach us how the land “recovers” once it is freed from a glacier.

This device will help us do this analysis on site and give us the data almost immediately. It’s really new, cool and exciting. We’re looking forward to learning about soil succession with the use of this device!


Blogger: Emi (Biology Professor and MinION hand modeler)

Driving, of a sort

This is the first of a couple of short clips from the GoPro dashboard cam that we used in 2016 (we plan to use it again this year). The wheat-to-chaff ratio is pretty low, but the kernels you do find tend to be gems. In this clip we are driving down to the red sand beach on the west coast of Iceland. It’s near the end of a long day of driving from Akureyri; Nic is at the wheel with “support” from Erin, Deeksha, and the Talking Heads.

Catching-up

In some sense the amount of activity going on is inversely proportional to the number of posts we’re making here to document it… Here’s a quick summary of what we’ve been working on since my last post:

o LiDAR – We have a new unit which is much more capable. We built a test rig in Hopper to simulate having it on Kia; we’re just starting to collect data. Nic and Kellan are working on the algorithms/workflow to analyze it. Charlie is working on mounting it on Kia.
o Image processing – Kellan and Nic are working on a workflow for extracting features from the images. Charlie built a map (headed to QGIS) with marked known POIs on the Skalanes peninsula to use as part of the training data.
o Glacier forefield sampling – Tara and Charlie are developing a sampling plan and 16S rRNA workflow for processing the soil samples.
o Benchmarks for registering multi-modal data collected by sensors on the RPA.
o Discovered Siggi’s yoghurt.
o Kristin and Charlie are working on Field Day, mostly on the BLE plumbing.
o Checklist App – More news to follow.
o Gail and Charlie have done lots of logistics and planning.
o Emi is working on the avian surveys and the MinION workflow for processing glacier forefield soil samples.
o New ambiance platform (single chip) is designed, construction to follow.

More regular updates to follow, we’re all psyched.

Where are we?

I learned this week the breadth of coordinate systems that are used to geolocate a “point” on the surface of the earth. A very nice archeological survey was done of the Skalanes peninsula about 10 years ago (as near as I can tell, it is in Icelandic). The locations of the features they catalog are given in eastings and northings from a point that I have not found documented yet. It appears to be UTM, but when you use 28W, the UTM grid zone for eastern Iceland, you get kind-of-sort-of close, say within a couple of km, but nowhere near exact. The survey team used a differential GPS; if we can obtain a copy of their database we should be able to figure out how to convert from their coordinate system.
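For anyone playing the same guessing game, here is a minimal sketch of how candidate coordinate systems can be sanity-checked with pyproj. The easting/northing pair is a placeholder rather than a value from the survey, and the Icelandic national grid (ISN93, EPSG:3057) is listed only as another candidate worth ruling in or out.

```python
# A minimal sketch of trying candidate coordinate systems with pyproj.
# The easting/northing pair below is a placeholder, not a value from the survey database.
from pyproj import Transformer

easting, northing = 720000.0, 7240000.0   # hypothetical feature location from the survey

candidates = {
    "UTM zone 28N (EPSG:32628)": "EPSG:32628",
    "UTM zone 27N (EPSG:32627)": "EPSG:32627",
    "ISN93 / Lambert (EPSG:3057)": "EPSG:3057",   # Icelandic national grid, another possibility
}

for name, crs in candidates.items():
    to_wgs84 = Transformer.from_crs(crs, "EPSG:4326", always_xy=True)
    lon, lat = to_wgs84.transform(easting, northing)
    # Eyeball which candidate lands near the Skalanes peninsula (roughly 65.3 N, 13.7 W).
    print(f"{name}: lat {lat:.5f}, lon {lon:.5f}")
```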

In other news Gail and I have finished making the hostel, ferry, and car reservations. Next up is exploring bulk airfare.

What do our computer eyes see?

I’ve begun stepping into the world of machine learning algorithms; Kellan has been helping with some of the mathematical aspects, which has made debugging a breeze. This post is going to be a bit image-heavy, so take it easy and look through some of the results!

These first two images are SIFT-based implementations, with the lupin plant as the target.

Original:

The next is based on SURF and is targeting the rocks in the image. Parts of the bridge are ‘found’, but mostly due to similarities in pixel colours. This can be worked out eventually.
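For a sense of what these detectors are doing under the hood, here is a minimal SIFT sketch with OpenCV. It is not the exact code behind the images above; the file names and parameters are placeholders, and depending on your OpenCV build SIFT may live under cv2.xfeatures2d instead.

```python
# Minimal sketch: detect SIFT keypoints in an image and draw them for inspection.
import cv2

img = cv2.imread("skalanes_test.jpg")           # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

sift = cv2.SIFT_create(nfeatures=500)           # cap keypoints to keep the output readable
keypoints, descriptors = sift.detectAndCompute(gray, None)

# Draw the detected keypoints (with size and orientation) onto the original image.
annotated = cv2.drawKeypoints(
    img, keypoints, None, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite("skalanes_sift.jpg", annotated)
print(f"Found {len(keypoints)} keypoints")
```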

We’re back…

After a bit of a break from blogging we’re back. Nic and I have been puddling along with image analysis, LiDAR, and the ambiance platform over the past couple of months. Recently Kellan joined us, working on machine learning/feature extraction (she is also part of the group that will be traveling to Iceland with us). Gail and I have been busy on the logistics side; we have van & car reservations, ferry tickets to Heimaey, and a few of our hostel reservations done. We have also been working with Oli on the details related to Skalanes, and with Gummi (Colder Climates) about the glacier work and a hike to the Laki craters.

just a quick update for the blog

This is just a short blip for the blog, as there is a lot to go over at the first research meeting in about two weeks!
As we move towards the end of the 2016 winter semester there’s still a lot of work to be done. I started mainly with learning how to use QGIS, first the basics and then focusing on scripting it so that things happen automatically. What has been the focus more recently is how to get everything talking to one another. The way I see it, because of how the infrastructure is set up, there needs to be a middle program or script that acts as the control point. This control script will allow the drone to fly and produce images; the script then takes over and, within a matter of $TIME, outputs the final result. Already we can stitch images together, get flight maps and geolocations, apply the image corrections for viewing in NIR, and a few other things.
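As a rough illustration (not our actual code), the control-point idea might look like the sketch below. Every stage body here is a placeholder for tooling we already have: flight planning, image download, NIR correction, stitching, and a QGIS-ready export.

```python
# A rough sketch of the "control point": one script that runs each stage in order.
# The stage bodies are placeholders for our existing tools, not real implementations.
from pathlib import Path

def fly_mission(work_dir: Path):
    print("1. upload the flight plan and fly the programmed mission")

def pull_images(work_dir: Path):
    print("2. download the raw images from the aircraft")

def correct_and_stitch(work_dir: Path):
    print("3. apply NIR corrections, stitch, and geotag the mosaic")

def export_for_qgis(work_dir: Path):
    print("4. write a GeoTIFF layer that QGIS can load")

PIPELINE = [fly_mission, pull_images, correct_and_stitch, export_for_qgis]

def run_pipeline(work_dir: Path):
    work_dir.mkdir(parents=True, exist_ok=True)
    for stage in PIPELINE:
        stage(work_dir)   # if any stage fails, stop rather than producing a bad map

if __name__ == "__main__":
    run_pipeline(Path("flight_output"))
```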

Some setbacks

This week has been a bit frustrating, but it all seems to be looking up now.

After taking some test images with the UAV last weekend I realised that there was image fragmenting going on with the camera. This meant that one of the cables was either loose, dirty, or damaged. So, after quite a few hours of hardware debugging (cleaning plus some cable re-seating, etc.) I’m reasonably certain the ribbon cable that transfers the actual image to the tablet receiver needs to be replaced. I still have no idea what caused the damage in the first place, but that’s a little beside the point. Unfortunately that means the camera will be out of order until the replacement comes in. (…Charlie ordered two just in case…)

BUT FORTUNATELY, that gave me an excuse to take the whole camera gimbal off, which means it’s time to fly without the camera and stress-test how much of a payload it can carry! *cough* I mean, purely in the interest of knowing our total payload for the UAV+CAMERA+LIDAR.

In other news, after setting up a physical router I was able to solve the uplink problem. Essentially, it turns out that having complete control over your ports is really useful when you’re trying to move ~6 different Java servers’ worth of information at the same time.

Super large backlog update

Okay, so a lot has happened over the last month!

Early this month I started to dive into how we are going to stitch our images together. I approached this through OpenCV so that I have access to Python libraries like NumPy. At its core this is taking a directory of images as input, detecting keypoints, matching descriptors between images, then estimating a homography and warping the images to fit. The results are PRETTY good right now, but could be better. I’ve since spent quite a bit of time working with the code, getting it to accept more images on the first run.

[image: skalanes_stitch_1]
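Here is a minimal two-image version of that idea, not our exact pipeline: detect keypoints, match descriptors, estimate a homography with RANSAC, and warp one image onto the other. The file names are placeholders.

```python
# Minimal sketch: stitch two overlapping frames via SIFT matches and a homography.
import cv2
import numpy as np

img1 = cv2.imread("frame_a.jpg")   # hypothetical overlapping aerial frames
img2 = cv2.imread("frame_b.jpg")

# 1. Detect keypoints and compute descriptors.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
kp2, des2 = sift.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)

# 2. Match descriptors and keep only the clearly-best matches (Lowe's ratio test).
matcher = cv2.BFMatcher()
raw = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in raw if m.distance < 0.75 * n.distance]

# 3. Estimate a homography from the matched points with RANSAC.
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# 4. Warp img1 into img2's frame and lay img2 on top (a crude blend, good enough to eyeball).
h, w = img2.shape[:2]
canvas = cv2.warpPerspective(img1, H, (w * 2, h))
canvas[0:h, 0:w] = img2
cv2.imwrite("stitched.jpg", canvas)
```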

The Earlham College poster fair was also earlier this month, and I wrote and designed a poster detailing some highlights of our research; a PDF of the poster can be found here –> uas-poster-2016.

I suggested that Vitali and Niraj use OpenCV for object detection of archaeological sites. I’ve also started to look into how things like animal activity and vegetation health might be assessed through computer vision as well. Interestingly enough, vegetation is not only easy for computer vision to find; you can also get rough estimates of how nitrogen-heavy the plants are.
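As a toy illustration of the “vegetation is easy to find” claim, a simple excess-green index already does a decent job of masking green plants in an ordinary RGB photo. This is a generic technique rather than our project code, and the file name and threshold below are placeholders that would need tuning per scene.

```python
# Toy example: the excess-green index (ExG = 2G - R - B) highlights vegetation in RGB images.
import cv2
import numpy as np

img = cv2.imread("field_photo.jpg").astype(np.float32) / 255.0
b, g, r = cv2.split(img)

exg = 2 * g - r - b                        # excess-green index per pixel
mask = (exg > 0.1).astype(np.uint8) * 255  # simple threshold; tune per scene and lighting

coverage = mask.mean() / 255.0
print(f"Approximate vegetation coverage: {coverage:.1%}")
cv2.imwrite("vegetation_mask.png", mask)
```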

I’ve continued to work with UGCS’s open software packages. I recently got custom map layers (i.e., GIS layers) to appear in the flight program. As the majority of the SDK is undocumented, it feels a lot like reverse engineering and takes more time than I would like.

This weekend I had intended to fly a programmed flight over our campus to make a high-resolution map that Charlie had asked me for. I’ve since run into networking problems, though, and cannot proceed further until I get them figured out. I cannot get an uplink to the UAV, but I can receive download and telemetry data. I suspect this is mostly a networking error, but it’s continuing to confuse me.

Moving forward: first, my plan is to get the ground station working correctly again; this is my highest priority. Second, while Niraj and Vitali handle the computer vision software, I will start familiarising myself a bit more with QGIS.
