And so, we landed!


Today marks our first day here in Iceland.

Events started in a bit of disarray, with the airline misplacing some important biology items and Charlie attempting new fashion trends, but nevertheless, after collecting our things from the baggage claim, we soon found the members of our group who had arrived ahead of the main party.

Dan was our fearless leader of the day, and we organised into groups to get some errands done, such as getting an emergency SIM card, going grocery shopping, and getting dry ice. At this point everyone was feeling the effects of tiredness, jet lag, and the ever-present brightness special to Iceland.

The day was far from over though; for our first real taste of Icelandic scenery, the group went off to Þingvellir, a national shrine and home of the world's first parliament, which lies directly next to the Almannagjá canyon, formed by the separation of the North American and Eurasian tectonic plates.
In such a short time we have already been met by so much beauty here in the land of fire, ice, and trolls, and we are all excited for what the next couple of weeks have in store for us.

What do our computer eyes see?


I've begun stepping into the world of machine learning algorithms. Kellan has been helping with some of the mathematical aspects, which has made debugging a breeze. This post is going to be a bit image-heavy, so take it easy and look through some of the results!

These first two images come from a SIFT-based implementation, with the lupin plant targeted in the original image.

Original:

The next is based on SURF and targets the rocks in the image. Parts of the bridge are 'found', but mostly due to similarities in pixel colour. This can be worked out eventually.
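
For anyone curious what the detection step looks like in code, here is a minimal sketch, assuming OpenCV 3 with the contrib modules installed (the filename is a placeholder):

```python
import cv2

# Load an aerial image in grayscale; 'field.jpg' is a placeholder filename.
img = cv2.imread('field.jpg', cv2.IMREAD_GRAYSCALE)

# SIFT and SURF live in the contrib module (opencv-contrib-python).
sift = cv2.xfeatures2d.SIFT_create()
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

# Detect keypoints and compute descriptors for each method.
kp_sift, des_sift = sift.detectAndCompute(img, None)
kp_surf, des_surf = surf.detectAndCompute(img, None)

# Draw the detected keypoints for a quick visual check.
out = cv2.drawKeypoints(img, kp_sift, None,
                        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite('keypoints.jpg', out)
```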

Just a quick update for the blog


This is just a short blip for the blog, as there is a lot to go over for the first research meeting in about two weeks!
As we move towards the end of the 2016 winter semester, there's still a lot of work to be done. I started mainly with learning how to use QGIS: first the basics, then focusing on scripting it so that things happen automatically. The focus more recently has been how to get everything talking to one another. The way I see it, because of the infrastructure system, there needs to be a middle program or script that acts as the control point. This control script is what will allow the drone to fly and produce images; the script then takes over and, within a matter of $TIME, outputs the final result. Already we can stitch images together, get flight maps and geolocations, apply image corrections for viewing in NIR, and a few other things.
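
To make the control-script idea concrete, here is a rough sketch of what I picture that middle script doing. Every stage name and command below is a placeholder, not working code from our system:

```python
import subprocess

# A hypothetical version of the "middle" control script: each stage is an
# external tool or script that consumes the previous stage's output.
# All script names and arguments here are placeholders.
STAGES = [
    ['python', 'fetch_images.py', 'raw/'],             # pull images off the UAV
    ['python', 'stitch_images.py', 'raw/', 'map.tif'], # mosaic them
    ['python', 'nir_correct.py', 'map.tif'],           # apply the NIR corrections
    ['python', 'qgis_report.py', 'map.tif'],           # scripted QGIS output
]

def run_pipeline():
    for cmd in STAGES:
        print('running:', ' '.join(cmd))
        subprocess.run(cmd, check=True)  # stop the run if any stage fails

if __name__ == '__main__':
    run_pipeline()
```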

Some setbacks


This week has been a bit frustrating, but it all seems to be looking up now.

After taking some test images with the UAV last weekend, I realised that there was image fragmenting going on with the camera. This meant that one of the cables was either loose, dirty, or damaged. So, after quite a few hours of hardware debugging (cleaning plus some cable re-seating, etc.), I'm reasonably certain the ribbon cable that transfers the actual image to the tablet receiver needs to be replaced. I still have no idea what caused the damage in the first place, but that's a little beside the point. Unfortunately, that means the camera will be out of order until the replacement comes in. (…Charlie ordered two just in case…)

BUT FORTUNATELY, that gave me an excuse to take the whole camera gimbal off, which means it's time to fly without the camera and stress-test how much of a payload it can carry! *cough* I mean, purely in the interest of knowing our payload total for the UAV+CAMERA+LIDAR.

In other news, after setting up a physical router I was able to solve the uplink problem. Essentially, it turns out that having complete control over your ports is really useful when trying to manage ~6 different Java servers' worth of information transferring all at the same time.

Super large backlog update


Okay, so a lot has happened over the last month!

Early this month I started to dive into how we are going to stitch our images together. I approached this through OpenCV so that I have access to Python libraries like NumPy. At its core, this takes a directory of images as input, detects key points, matches descriptors and features as vectors, then processes them through homography and warps the images to fit. The results are PRETTY good right now, but could be better. I've since spent quite a bit of time working with the code, getting it to accept more images on the first run.

[Image: skalanes_stitch_1]
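
For the curious, the core of that pipeline boils down to something like this two-image sketch, again assuming OpenCV 3 with the contrib modules (filenames are placeholders):

```python
import cv2
import numpy as np

# A condensed two-image version of the stitching pipeline described above.
img1 = cv2.imread('left.jpg')
img2 = cv2.imread('right.jpg')

# 1. Detect keypoints and compute descriptors.
sift = cv2.xfeatures2d.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# 2. Match descriptors and keep the good matches (Lowe's ratio test).
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# 3. Estimate a homography from the matched point pairs.
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# 4. Warp the first image into the second image's frame and paste.
h, w = img2.shape[:2]
pano = cv2.warpPerspective(img1, H, (w * 2, h))
pano[0:h, 0:w] = img2
cv2.imwrite('stitched.jpg', pano)
```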

The Earlham College poster fair was also earlier this month, and I wrote up and designed the poster detailing some highlights of our research. A PDF of the poster can be found here: uas-poster-2016.

I suggested that Vitalii and Niraj use OpenCV for object detection of archaeological sites. I've also started to look into the different ways things like animal activity and vegetation health might be detected through computer vision as well. Interestingly enough, vegetation is not only easy for computer vision to find; you can also get rough calculations of how nitrogen-heavy the plants are.
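
As a taste of why vegetation is easy to find, one common trick is the Excess Green index computed straight from an RGB image. This is only an illustration, not necessarily the method we will settle on:

```python
import cv2
import numpy as np

# Excess Green index: ExG = 2G - R - B. Vegetation tends to score high.
# 'plot.jpg' is a placeholder filename.
img = cv2.imread('plot.jpg').astype(np.float32)
b, g, r = cv2.split(img)
exg = 2 * g - r - b

# Threshold the index to get a rough vegetation mask.
mask = (exg > 20).astype(np.uint8) * 255
coverage = mask.mean() / 255  # fraction of pixels flagged as vegetation
print('vegetation coverage: %.1f%%' % (100 * coverage))
cv2.imwrite('veg_mask.png', mask)
```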

I've continued to work with UGCS's open software packages. I recently got custom map layers (i.e., GIS) to appear in the flight program. As the majority of the SDK comes undocumented, it feels a lot like reverse engineering and takes more time than I would like.

This weekend I had intended to fly a programmed flight over our campus to make a high-resolution map that Charlie had asked for. I've since run into networking problems, though, and cannot proceed further until I get them figured out. I can receive downlink and telemetry data but cannot get an uplink to the UAV. I suspect this is mostly a networking error, but it's one that's continuing to confuse me.

Moving forward: first, my plan is to get the ground station working correctly again; this is my highest priority. Second, while Niraj and Vitalii handle the computer vision software, I will start familiarising myself a bit more with QGIS.

I guess this is why they call it research


Starting off, this week has been spent reading and studying up on the current tech that's useful for us. The pun in the title is fully intended, as this is maybe the fifth time I've had to look at all the new technology and advances made (damn you, Moore's law).
Looking into a more advanced LiDAR system had me sifting through nearly all the viable models that fit our budget. Sample rate, beam power, light shielding, and mass were a few of the deciding factors that went into comparing models:

  • A high sample rate allows the LiDAR to travel faster without suffering a loss of resolution (see the quick calculation after this list).
  • The power output of the laser beam itself is important for getting greater distance; our original system only had a usable distance of ~1.5 m above the ground in broad daylight.
  • Light shielding, though it can be added later, is best built in as close to the laser output as possible. The shield lets through only the light recognisable to the laser receiver and blocks out as much UV light as possible.
  • Because the goal is to have the LiDAR mounted on the UAS, mass needs to be taken into consideration.
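
To make the sample-rate point concrete, here is a quick back-of-envelope calculation (the numbers are illustrative, not the spec of any model on the list):

```python
# Back-of-envelope check of sample rate vs. flight speed.
sample_rate_hz = 500.0   # LiDAR returns per second (illustrative)
flight_speed_ms = 4.0    # UAV ground speed in m/s (illustrative)

# Along-track spacing between consecutive returns:
spacing_m = flight_speed_ms / sample_rate_hz
print('one point every %.0f mm of travel' % (spacing_m * 1000))  # -> 8 mm
```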

After I made up a list and handed it over to Charlie to look over, he submitted it to the scientific equipment fund.

I also took the time this week to put together a new abstract for our research. Since first starting on the research, our knowledge and capabilities have expanded drastically, and I felt our abstract should reflect this. As of today, we have submitted to be a part of the Earlham College "Natural Sciences Division Undergraduate Research Poster Conference."

Back into the swing of things


Now, a few weeks into the new semester, it's time to stop procrastinating and get back to work!

I've started off by doing what is essentially mass updating of hardware, software, firmware, etc. The DJI drone system and ground control software both had to be brought up to date. Initially there were a lot of errors in getting them talking to one another again, but as of Wednesday this week they are fully happy and working properly. I've also taken the chance to update the plug-ins and various parts of the website, and I've made accounts on the blog for Vitalii and Niraj, who will be joining us to work on some of the back-end coding.

In an effort to streamline our massive photo cache, I've copied all the photos onto our media server here on campus so we can get things situated. After talking with Charlie and Erin, we have decided to sort initially by day and then by capture device. This means, though, that we have to map all photos to their correct day, which has proved to be more difficult than I thought… If the EXIF data on an image doesn't contain a date, the date is just set to the most recent "creation date", which means when I copied it. That's left Erin and me with a whole lot of images that still need to be sorted and sifted by hand. I'm also still finding places where images are stored, such as the Google Drive, etc., and have been working on getting all of those to the server as well. It's sort of like a big game of cat and mouse and is tedious at best.
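
The day-sorting itself is scriptable for anything that does carry EXIF data. Here is a rough sketch of the approach using Pillow; the paths and folder names are placeholders:

```python
import os
import shutil
from PIL import Image

# Sort photos into per-day folders using EXIF DateTimeOriginal where it
# exists; anything without it lands in 'unsorted' for hand triage.
SRC, DST = 'photos', 'sorted'

def shot_date(path):
    """Return 'YYYY-MM-DD' from EXIF, or None if the tag is missing."""
    try:
        exif = Image.open(path)._getexif() or {}
        stamp = exif.get(36867)  # 36867 = DateTimeOriginal, "YYYY:MM:DD HH:MM:SS"
        return stamp.split(' ')[0].replace(':', '-') if stamp else None
    except Exception:
        return None

for name in os.listdir(SRC):
    src_path = os.path.join(SRC, name)
    if not os.path.isfile(src_path):
        continue
    day = shot_date(src_path) or 'unsorted'
    os.makedirs(os.path.join(DST, day), exist_ok=True)
    shutil.copy2(src_path, os.path.join(DST, day, name))
```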

With Niraj and Vitalii joining us, I have also been doing my best to catch them up to speed on all the projects and the details therein. Much of this involves working with the UAV images and how to post-process them: trying things out on proprietary software, then figuring out how best to do it in large batches with open-source programs. I've most recently been trying to figure out whether it's possible to get a 3D topographic image without needing to apply a LiDAR point-cloud layer. While not as accurate, it would certainly be quick and would be used more as an approximation.

As time becomes available, I've also started going through all the video footage that we have saved and begun throwing together a very (VERY) rough storyboard for the documentary. I'm hoping to have a solid chunk of time this weekend to work on this a bit more. Really I'm just focused on getting some generic shots worked together so that we can use them as filler for voice-over and context.

As things move along I’ll keep posting! Cheers!

Blog 7/5/16: Drones, LiDAR, dirt


This is now our 5th day at Skalanes, and we are all feeling rather settled in and have our workflows… er… flowing. We were awoken to a bit of a surprise today: the tent was actually warm! This is the first day of being in Iceland where a jacket wasn't necessary without having to hike a volcano. With hardly a cloud in the sky, the weather was perfect for taking aerial images with the UAVs. Oli, Rannveig, and Bjarki took some of us out (at different times) to explain the archaeological dig sites and which areas would be helpful to map at high resolution. I did an initial flyover survey of the known archaeological site. Later, in post-processing, I started to look into tweaking the photos to highlight ground structures, cancel out water reflections, etc. It's all looking promising. Unfortunately, it's so bright that the sun would mess with the LiDAR's receiver, so I will have to wait for the sun to "set" before going out; this will probably be around midnight. Tara continues to work with the soil samples taken and to devise ways to process the soil.
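
For a sense of what that tweaking looks like, one simple example is CLAHE contrast enhancement in OpenCV, which tends to pull faint ground structures out of flat-looking images. This is just an illustration, not the exact processing chain (the filename is a placeholder):

```python
import cv2

# Contrast-limited adaptive histogram equalisation on a grayscale survey image.
img = cv2.imread('survey.jpg', cv2.IMREAD_GRAYSCALE)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
cv2.imwrite('survey_enhanced.jpg', clahe.apply(img))
```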

Visualisation and flight


This past week has been a busy one. I spent a great deal of time working with Erin on the UAV Lundi, where we were able to do test flights in fairly heavy wind, get used to the camera system, and learn how to stitch images together. I got a chance to look into some of the customisable flight controls such as "point of interest", "follow me", and "waypoints." Waypoints is particularly interesting, as it should allow us to define flight plans, save them, and then have Lundi fly them the same way on its own. Most of the custom commands are going to come from a "ground control station", which is just input from a laptop. I spent time looking into open-source ground stations that have already been created but didn't find anything exactly like I wanted. To remedy this, I've started coding up a few things using the DJI API to either create our own standalone ground station or add it into an open system. The goal is to have the ability to click waypoints on a map and have the drone fly the path with user-defined specifications.
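
To give an idea of what the ground station needs to produce, here is a sketch of a waypoint plan as plain data. None of these names come from the actual DJI API, and the coordinates are made up:

```python
from collections import namedtuple

# A waypoint plan as plain data, before it would be handed to the SDK.
Waypoint = namedtuple('Waypoint', ['lat', 'lon', 'alt_m', 'speed_ms'])

def plan_from_clicks(clicks, alt_m=50.0, speed_ms=5.0):
    """Turn map clicks ((lat, lon) pairs) into an ordered waypoint plan."""
    return [Waypoint(lat, lon, alt_m, speed_ms) for lat, lon in clicks]

# Example: a three-point survey line (made-up coordinates near campus).
plan = plan_from_clicks([(39.826, -84.911), (39.827, -84.910), (39.828, -84.909)])
for i, wp in enumerate(plan):
    print('WP%d: %.5f, %.5f @ %gm, %gm/s' % (i, wp.lat, wp.lon, wp.alt_m, wp.speed_ms))
```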

I'll update shortly about LiDAR visualisation and information. I'm working on creating a nice image and detailed instructions on what's going on!


Tests of light detection and ranging


Charlie and I had a conversation at the beginning of the week about whether the LiDAR module we are using is the "right" one for our purposes. The bottom line is that we don't exactly know, so I set out to answer that question.

[Images: IMG_4301, IMG_4300]

I built a controlled slider that gives us nearly 20 cm of movement that we have control over when taking LiDAR measurements. My first test was to place a pencil on the ground and take one minute of scans at each centimetre. The hope was to be able to pick out the pencil given "perfect" circumstances. After collecting the scan data, which consisted of only the distance in mm and the theta (the angle in degrees at which the scan was taken), I artificially created a Y value corresponding to the 20 cm of travel.
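
Converting those readings into 3D points is just a polar-to-Cartesian step, with the slider supplying the Y axis. A minimal sketch (the sample readings are made up):

```python
import math

# Convert raw (distance_mm, theta_deg) LiDAR readings into 3D points.
# One list of readings per centimetre stop; the slider supplies Y.
def to_points(scans, step_cm=1.0):
    pts = []
    for i, scan in enumerate(scans):
        y_mm = i * step_cm * 10.0           # slider position in mm
        for dist_mm, theta_deg in scan:
            t = math.radians(theta_deg)
            pts.append((dist_mm * math.sin(t),   # x: across-track
                        y_mm,                    # y: along the slider
                        dist_mm * math.cos(t)))  # z: out from the sensor
    return pts

# Tiny made-up example: two slider stops, two readings each.
scans = [[(900, -5), (650, 0)], [(905, -5), (648, 0)]]
for p in to_points(scans):
    print('x=%.1f y=%.1f z=%.1f (mm)' % p)
```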

After I got the data into a pleasing form, I began trying to visualise it. This proved to be a tad difficult, and I still don't have it quite right yet, but I was able to see something:

[Screenshot: first visualisation of the slider scan data, 2016-05-22]

I am fairly certain that the object towards the top of that screenshot is one of the legs of the tripod, picked up by the beam. I am not entirely sure, though, and will continue to run tests until something is conclusive.
