Applications open for a new conference on open tech in ag – GOAT

I already get it – take me to the conference page, or heck, just let me apply now! Want more info? Read on!

Why another conference?

Information about our food system should be public and easy to share. Unfortunately, that is often not the case. Food is produced by a mix of private and public entities, and information about our food system can be opaque, hard to find, or proprietary, and farmers may have limited control over the on-farm data they generate and limited ability to improve the tools they use.

Agricultural startups are often venture-funded, with interest veering toward capitalizing on farm data. Controlling such data, not just machines or sensors, is considered the most valuable game in town (at least in the short term). Closed data ecosystems hinder our ability to produce food equitably and sustainably, and to support farm-level decision-making.

Fortunately, there is also significant interest in creating open source hardware and software to increase transparency in the food chain, allow for data sharing among groups, engage the public, and make the benefits of shared data available to all. Though the number of projects is growing, they tend to be small, isolated within universities or small companies, and disconnected from one another. The result is duplicated effort, hard-to-find projects, and disconnected tools producing incompatible data. The lack of coordination means that as technology rapidly changes, closed-source companies are locking up the machinery, sensors, data, and varieties of the future.

Gathering for Open Ag Technology (GOAT)

We hope to coalesce developers and users of open ag technology around a single and clear vision:

The technologies that produce our food and the data about our food system should be public, and enable control by the farms and farmers that produce it.

Together, we can address the problems that prevent the creation of advanced, high-quality open technology and its adoption.

  • Coordinate existing development – there are many organizations working on open ag software and hardware today with overlapping interests. Let’s get together! Let’s talk about building software together BEFORE we spend time and money, not after!
  • Invite new development – Let’s make a place where newcomers can easily find developers and users with overlapping experiences. People evaluating open vs closed ag software/hardware should find a thriving, engaged community which can make their work easier, faster, and better.
  • Align technology and users – Finding people to try, test, break, and give feedback on new technology is hard but critically important to making technology relevant and useful. By bringing users and developers together with open lines of communication, we can increase the utility of open technology in ag across the board.

In addition, we will create an Open Ag Development Roadmap – If we collectively have a plan, then we can all contribute more effectively. Let’s reduce duplication of efforts, focus on common areas we all need, and put the effort of contributors into a broader context for funders so they can see the big picture.

Conference information

May 7–9, 2018 at the Omega Institute in Rhinebeck, NY.  Applications are open now but will close in late March, so apply soon.  There is money for participants who need funding – so please apply regardless of your financial status!

So – if you also believe the technologies that produce our food and the data about our food system should be public, please apply now.


🐐 The GOAT organizing team (Alix, Ankita, Chris, Dan, Dave, Don, Dorn, Greg, and Mike)

Posted by gbathree in Blog Posts, Open Tech Development
An Open Strategy to build Soil Carbon, part 1

Why soil carbon matters

Soils are the largest terrestrial pool of organic carbon on the planet, holding as much as 3,000 Gt of carbon in the upper 2 meters. To put that into perspective, the total amount of carbon in the atmosphere is currently about 870 Gt, and annual CO2-C emissions from fossil fuels are about 9.7 Gt per year. Furthermore, it is estimated that human activity, such as converting land from native vegetation to grazing and cropland, has transferred 133 Gt of carbon from the upper 2 m of soil to the atmosphere.  Sequestering carbon in the soil is the largest potential carbon sink, however…
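
A quick back-of-the-envelope comparison of those pools, using only the figures quoted above:

```python
# Quick sanity check on the carbon-pool numbers above (all values in Gt C,
# taken directly from the text; treat them as rough literature estimates).
soil_upper_2m = 3000      # organic carbon in the upper 2 m of soil
atmosphere = 870          # total carbon currently in the atmosphere
fossil_emissions = 9.7    # annual CO2-C emissions from fossil fuels
lost_from_soil = 133      # carbon moved from soil to atmosphere by land use

# The soil pool holds several times more carbon than the atmosphere...
print(soil_upper_2m / atmosphere)         # ~3.4x

# ...and historical soil losses alone equal roughly 14 years of current
# fossil fuel emissions.
print(lost_from_soil / fossil_emissions)  # ~13.7 years' worth
```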

… this is about more than climate change!

The soil beneath our feet provides innumerable ecosystem services that directly affect our lives:

  • Soils are a warehouse, storing the critical plant nutrients needed to grow the food we eat and the forests we enjoy.
  • Soils manage hydrological cycles, regulating the drainage, flow, and storage of water and playing a critical role in the recharge of groundwater.
  • Soils support biodiversity by providing a habitat for and supporting the growth of a variety of plants, animals and microbes.
  • Soils protect water and air quality by filtering toxic compounds and excess nutrients.
  • Soils provide physical stability and support for anchoring plants, withstanding erosive forces, allowing the passage of air and water and serving as the foundation for human structures.

Soil organic carbon (SOC), the fraction of soil carbon derived from historical vegetative cover and biological activity, is critical to a soil's ability to provide these ecosystem services. Physically, SOC acts as a glue, binding soil particles into the structures best able to retain water and resist compaction. Chemically, SOC retains critical nutrients and makes them available to plants. Biologically, SOC sustains the soil microorganisms critical to nutrient cycling and to protecting roots from diseases and parasites.

Can soils shift from a source to a sink for atmospheric carbon?

Currently, land managers can receive carbon offsets for not converting grasslands and woodlands under native vegetation into cropland, preventing potential future carbon emissions. However, there are many ‘carbon negative’ agricultural management practices that also have the potential to sequester atmospheric carbon (see table below). Detailed analyses of these land management strategies suggest that any one of these methods could sequester between 0.1 and 1 ton of carbon per hectare per year. At the farm level, this may seem like an insignificant amount of carbon. Taken globally, however, there is the potential to sequester as much as 8 Gt of carbon per year in agricultural lands, enough to offset as much as 80% of fossil fuel emissions each year.
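
To make the per-hectare vs. global framing concrete, here is a hedged back-of-the-envelope calculation. The ~5 billion hectares of global agricultural land is our assumption, not a figure from this post; under it, the quoted per-hectare rates alone land below the 8 Gt ceiling above, which presumably assumes more eligible land and stacked practices.

```python
# Illustrative scaling of the quoted 0.1-1 t C/ha/yr range. AG_LAND_HA is an
# assumed rough figure for global agricultural land, not from the text.
AG_LAND_HA = 5e9                 # ~5 billion hectares (assumption)
low_rate, high_rate = 0.1, 1.0   # t C per hectare per year (from the text)

global_low = AG_LAND_HA * low_rate / 1e9    # tonnes -> Gt
global_high = AG_LAND_HA * high_rate / 1e9
print(global_low, global_high)              # ~0.5 to ~5 Gt C per year
```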


How to promote soil carbon sequestration?

Using soils as a sink for atmospheric carbon will require a dramatic shift in how we view and manage the world’s soils. The process of terrestrial sequestration is a slow, long term proposition. Stimulating the national or global action required to achieve the sequestration outlined above is a challenge that will require the coordination of policy makers, institutions, farmers groups and carbon emitters.

Providing the incentive to increase adoption of carbon negative management practices can follow multiple tracks. One is a policy of subsidies to reward farmers who adopt carbon negative practices. However, subsidy programs must be continually updated as new sequestration strategies emerge, and governments have historically struggled to pick winners in spaces with lots of innovation.

Another option is a “Carbon Tax”: make carbon emitters pay a tax for each ton of carbon produced. While simple and efficient, a carbon tax is a political nonstarter. In either of these cases, the government effectively sets the price for carbon sequestration and bears the burden of additional administrative costs to manage subsidy or taxation systems.

As such, carbon markets have become political and economic middle ground. They provide financial incentives for land managers who sequester carbon in soils, are “market based”, and are relatively flexible to changing technology. Carbon markets include ‘compliance markets’ and ‘voluntary markets.’ Compliance markets are formal cap-and-trade markets where government agencies decide what carbon offsets are allowed, and they often set the carbon pricing as well. Similar to subsidies and taxation above, formal cap-and-trade markets place a large administrative burden on public institutions. Conversely, voluntary markets generally adhere to the standards developed by a number of voluntary standard-setting bodies.

Choosing carbon markets

Any attempt to develop meaningful subsidies and taxation policies would require significant buy-in from politicians in Washington D.C., something that does not seem possible under the current political climate. Likewise, the development of national level compliance markets within the USA would appear to be years away at best, although some regional compliance markets do exist within the USA. This leaves voluntary markets as the most promising entry point for carbon negative land management to provide carbon offsets.

Voluntary carbon markets are much smaller than compliance markets, trading $278 million worth of carbon compared to over $50 billion on compliance markets. Likewise, the volume of carbon traded is much smaller: 84 Mt compared to 6 Gt in 2015.
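
Dividing value traded by volume traded gives a rough implied average price per tonne (a simplification, since traded value and volume aren't perfectly matched, but it shows the scale):

```python
# Implied average carbon price from the market figures quoted above.
voluntary_usd, voluntary_t = 278e6, 84e6     # $278M traded over 84 Mt
compliance_usd, compliance_t = 50e9, 6e9     # $50B traded over 6 Gt

print(voluntary_usd / voluntary_t)    # ~ $3.3 per tonne
print(compliance_usd / compliance_t)  # ~ $8.3 per tonne
```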

Despite their smaller size, voluntary markets can be a good space to test carbon negative farming offsets. Since they are governed by voluntary standard bodies, and not by government agencies, they are more willing to develop and test new pathways. Purchasers of carbon offsets in these markets tend to be large companies looking to reduce their carbon footprint and prepared for future carbon regulations.  These companies tend to be forward thinking and willing to experiment.


Developing pathways for carbon offsets via soil sequestration will require a concerted effort to overcome technical challenges. These include the lack of existing pathways to soil carbon sequestration credits, the difficulty of model-based SOC prediction, and the lack of affordable methods for accurately measuring SOC.

There are no accepted pathways for sequestering new soil carbon

Under the current regulatory framework guiding both compliance and voluntary markets, there are no accepted pathways to sequester ‘new’ carbon in soil. There is a pathway to prevent the release of existing SOC into the atmosphere by stopping the conversion of native grassland or forest into agricultural production. While this is an important step in reducing greenhouse gas emissions, it does not lead to new, carbon negative innovations, nor is its potential impact very high. The amount of US land that could benefit from this type of carbon credit is small, with only about 400,000 acres of US grasslands and woodlands converted to agriculture in 2011/12.

There are two critical factors which prevent the establishment of soil carbon sequestration pathways. First, there are concerns about the potential reversibility of carbon sequestered in soils. For example, a manager builds up soil carbon using no-till management, sells carbon offsets based on that SOC build-up, and then plows the land, releasing a portion of that carbon back into the atmosphere. This is an issue that can be resolved by maintaining financial incentives. The second factor is that verifying soil carbon levels, whether through predictive models or direct soil measurements, is very expensive, leading to very high transaction costs. Without technologies to overcome this challenge, the transaction costs of soil carbon offsets would be prohibitive.

The Problems with Model Based Verification

Soil organic carbon dynamics at the landscape level are very complex, with spatial variability causing significant verification challenges. The current approach to overcoming this challenge is ever more complex SOC models. Often, these models require a combination of direct SOC measurements, peer-reviewed and project-specific parameterization, and frequent auditing to develop accurate predictions. The result is an overly complex methodology, such as the 80-page handbook for verifying carbon credits from not converting grass and woodlands to agricultural lands. This approach leads to high transaction costs which, when combined with the relatively low carbon price, fail to provide an incentive for land managers and do not support an innovative environment for new sequestration methods.

Direct Measurement of Soil Organic C

Directly measuring SOC stocks currently requires laboratory-based methods, such as gas chromatography and elemental analysis. While these are highly accurate, they are also time consuming and expensive. High costs limit both the frequency and area of sampling, making it harder to quantify stocks of SOC at a farm or ranch scale. As a result, current soil inventory methods lack the spatial and temporal resolution needed to accurately quantify SOC stocks across large scales and to adequately detect change over time.  Some companies and academics are attempting to build libraries to address this variation, but their efforts are usually not public or collaborative.

We are building open software and hardware so anyone, anywhere can predict soil organic carbon.

In part 2 of this post, we’ll talk about the approach we and our partners are pursuing to directly measure soil carbon.  Direct measurement will result in a more innovative ecosystem for land managers and policy makers alike to estimate soil carbon.  We’ll also walk through the hardware and software that exist to make this happen and why open hardware, software, and data are the key to long-term success.

Posted by gbathree in Blog Posts, Soil
The future of fresh produce: a skeptical optimist’s view

Imagine if consumers and farmers could measure the nutrient density of fresh produce on farms and in stores in seconds. Consumers would demand nutrient-dense produce because they could see it empirically at the point of sale. Farmers would get a premium for more nutritious crops. Higher prices would motivate farmers to develop farming practices that increase nutrient density in addition to yield. The result? A market-driven, sustainable way to improve the health of millions of people.

The Bionutrient Food Association is trying to make this happen, and Our Sci is going to help.

If you’re an informed foodie/techie, you’re probably rolling your eyes. There are lots of real problems with this utopic-sounding plan. Well, you’re right… but Our Sci wouldn’t take this on if we didn’t think it was possible, so stick with me.

Here’s the plan…

Build a device to measure nutrient density

Reality check — there is no device capable of measuring every nutrition-related compound of interest. Doesn’t exist. Instead, our strategy is to correlate reflectance data in the UV/VIS/NIR spectra to broad classes of nutrients using standard lab methods (US Pharmacopeia, USDA, ASME, etc.). NIR reflectance instruments, like the SCiO, have built a lot of hype and failed to deliver… literally.  The concept of building correlations between UV/VIS/NIR spectra and reference data is not new. Examples include estimating total carbon in soil (both us and others), total cannabinoids in marijuana, dietary fiber and other compounds in fresh produce, and drug identification in pills.

What we’re attempting is one step beyond those examples — nutrients in fresh produce are present in very small quantities, and their relationship to UV/VIS/NIR will be more complex than the aforementioned examples. Proof of concept work will determine the level of granularity and breadth of compounds that we can predict. Arguments about the tech would take a post of its own and I’m sure I’ll write it at some point… but for now let’s assume we can crack that nut. Next problem: spectral data can’t actually ‘see’ the compounds of interest, so this magic only works if you have a sufficiently large and detailed reference database. Sending a sample to a lab to measure a relatively small number of compounds costs 100s of dollars per sample, so building that reference dataset is no small feat. Enter part 2 of the plan…
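
As a minimal sketch of the correlation idea (made-up numbers, a single wavelength, ordinary least squares; real calibrations use many bands and methods like partial least squares):

```python
# Regress a lab-measured nutrient value against reflectance at one band.
# All data here is hypothetical, purely to illustrate the calibration idea.
reflectance = [0.42, 0.48, 0.55, 0.61, 0.70]   # hypothetical NIR readings
lab_value = [12.1, 13.0, 14.6, 15.9, 17.8]     # hypothetical lab results

n = len(reflectance)
mx = sum(reflectance) / n
my = sum(lab_value) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(reflectance, lab_value))
sxx = sum((x - mx) ** 2 for x in reflectance)
slope = sxy / sxx
intercept = my - slope * mx

# Once calibrated, predict the nutrient value of a new, unlabeled sample
# from its spectrum alone, with no lab test required.
new_sample = 0.58
print(intercept + slope * new_sample)  # ~15.3
```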

Run a national survey of nutrient density in stores and on farms

That’s going to be pricey! We’re working on strategies to generate revenue from collecting the reference data, and lowering the cost of collecting it. We think that even without a device, an effectively designed survey would produce a dataset which could help direct purchasing decisions today. There are many organizations and even individuals interested in making this data publicly available and we are pursuing them as funding partners.

To lower costs, the Bionutrient Food Association’s membership base can help collect the samples. Also, we’ve partnered with Health Research Institute, where we can poach (with permission of course!) incoming samples from other projects/clients and collect measurements using our device alongside their lab measurements.

Make it a movement, not a product

Measuring nutrient density willy-nilly with no feedback mechanism to farmers will not change the food system. We need to establish a conduit between farms and consumers so nutrient density information is traceable. We also need to allow researchers (from academia, industry, and engaged citizens) to identify and share insights from the data. Furthermore, we can help everyone in the system self-organize experiments to test harder problems. Sure — you can mine consumer data to figure out which farms are making the most nutrient-dense tomato. But what if you want to know how tomatoes impact heart disease? Then you need to be able to organize an experiment and invite consumers to join, collaborate, and communicate over time.

Important experiments should be run in the real world, with real people. Well designed collaboration software and public data (with reasonable guards in place for privacy) make those interactions easier and more likely to happen.

Still skeptical?

Ok, one last pitch: even if UV/VIS/NIR reflectance doesn’t work today, some day a new technology will be able to predict the nutrient content of food, easily, accurately, and cheaply. When that day comes, companies will sell you the device. The data streams will be closed, and mined for insights sold to the highest bidder. Researchers will have to pay to use the data (the public won’t see it at all), slowing the pace of learning about how food nutrition impacts human health. The best insights will be kept by the companies, patented, and turned into products (super-food extracts or new drugs or whatever) and sold back to us at 100x markups. It’s not a dystopia — it’s reality. Think I’m overstating it? It’s already happening to your social data. Consumer Physics, the company behind the SCiO, which delivered wildly late and completely overstated what the device could do, is now in a patent dispute. Yay, progress… for lawyers, at least.

So more than anything else, this collaboration is about getting ahead of the problem.

Let’s put our flag in the ground: information and technology relating to our food supply should be a public good.

The full campaign, details, and project plan are available in the Real Food Campaign section of the Bionutrient Food Association’s site. Go follow them on Facebook and Twitter. You can read more from me and other Our Sci folks on the Our Sci blog.

Posted by gbathree in Blog Posts, Nutrition
Reflectometer progress: circuit board

In case you missed the last post, we are building a reflectometer.  The goal is a simple-to-use, low-cost, flexible device for a variety of measurements, including tree canopy, soil carbon, and food nutrient density.

If you’re curious about the development process, IRNAS produced a great post walking through the steps from understanding the application to scaled manufacturing.  We are in the rapid prototyping phase, though because we are basing the design on work already proven in the PhotosynQ project, I expect it will go more quickly than usual.  Here are some updates.

Design Criteria

To build a generalized design usable across several applications, the device must measure several types of objects effectively.  To measure reflectance well, the background behind the object of interest must be consistent – variation in the background will produce noise in the signal.  The mechanical design addresses this (see image below).  Here’s the list of materials this device is designed to measure.

handheld reflectometer

  • Drop of liquid (from a crushed leaf or crushed food) – refraction, brix.
  • Small cuvette of liquid (juice, chemical mixture) – reflectance at all wavelengths, density, colorimetry.
  • Bulk solid (soil) – soil carbon.
  • 2D object (leaf, paper) – chlorophyll content.
  • 3D object (fruit, vegetable) – reflectance at all wavelengths, correlated to nutritional content. (This use cannot rely on a consistent backdrop due to size and cannot require a guaranteed distance to the sample.)

In many projects, signal quality can be defined from the start.  Here, however, it’s hard to say whether we need less than 0.5% noise or less than 0.05% noise without simply testing on actual samples.  So we will have a reference set of objects (colored liquids, bulk solids, colored paper of different thicknesses, etc.) to confirm that we are hitting the quality needed as we iterate on the design.  We are building that reference set now, so when we have our first prototype we will be ready to put it through its paces!
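
The noise check itself is simple: repeat a measurement on one reference object and express the spread as a percent of the mean. A sketch, with hypothetical detector counts:

```python
# Repeated measurements of a single reference object (hypothetical raw
# detector counts); the goal is to see whether instrument noise stays
# under a target like 0.5% or 0.05% of signal.
readings = [10213, 10198, 10221, 10205, 10190]

mean = sum(readings) / len(readings)
var = sum((r - mean) ** 2 for r in readings) / (len(readings) - 1)
noise_pct = 100 * var ** 0.5 / mean   # sample std dev as % of mean
print(noise_pct)  # compare against the 0.5% and 0.05% targets above
```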

The next step was to produce a schematic.  That’s just a fancy word for a wiring diagram – which parts connect to which other parts.  We use a program called KiCAD, which is free and open source and amazing.  Here’s a picture of our schematic for the reflectometer.

reflectometer schematic

Once that’s complete, then you actually have to wire it up.  Here’s what the actual circuit board looks like.

reflectometer layout

You then send this layout to a company that manufactures the board.  Next, someone has to put all the parts on the board – in our case, about 200 components need to be soldered on.  Until recently, that was the slowest part of rapid prototyping: hand soldering produces errors and takes time, and having an outside contractor do it meant too much back-and-forth.  But now companies provide fast turnaround (2 weeks!) on fully populated boards, and they keep huge stock inventories so they rarely have to order your parts at all – most are already in house.  That means you can prototype faster and cheaper than ever before.

We are less than a week away from shipping this design out to be routed and populated.  A few weeks after that we should have boards in hand, ready to test against our standards.  If we find we exceed our standards, then we can begin to identify places for cost savings, hopefully getting the board into the $50 in parts range.

If you want to download the schematic, you can find it on our GitLab page.  More updates in a few weeks.


Posted by gbathree in Open Tech Development
We are building an open source reflectometer… and here’s why

“But wait,” you say, “there are already some out there, and they are pretty well designed and reasonably priced!”  Well, yes – there are full spectrometers like the Spectruino ($411), the Open Source Colorimeter ($80 + $20 per LED) from IORodeo, and publications from universities describing open colorimeter designs (Appropedia and MTU have a good one, but there are several others – these are DIY, so < $100 in parts).  Pretty cheap, and lots of available designs.

Like the seat designed for the average person but usable by no one, these devices show why product designers should avoid the law of averages: they are too general-purpose to be particularly useful.  The MultispeQ ($600) could work, but it was designed for photosynthesis measurements and is over-engineered for applications outside of photosynthesis.  None of these devices do exactly what our community partners need, which is…

Arborists need a low cost and easy to use chlorophyll meter to add more rigorous sensor data to visual tree assessments.

Consumers + farmers need a way to measure food nutrient density in stores and on farms.

Soil scientists and regulators need to measure soil carbon in the field, quickly and easily.  Doing so could create a massive new pathway for carbon markets to value sequestration of carbon in soil.

Cannabis growers, consumers, and dispensaries need to be able to confirm total cannabinoids and THC levels to comply with regulations, ensure quality product, and identify fraudsters.

These cases require a device which is low cost, easy for non-scientists to use, flexible in what it measures (drops of liquid, cuvettes, leaves, aggregate solids like soil, and whole solids like a pear), usable in field conditions, fast, and open source.  Reflectance is a pretty simple measurement and tells you almost nothing without reference data.  A reference dataset pairs reflectance values with validated lab-based measurements on the same set of samples to build correlations between the two (if they exist!).  But building a reference database can be very expensive.  In the case of food nutrition, even a small suite of lab tests for vitamins, minerals and antioxidants can cost $500 or more per sample.  A reference database might need 100s or 1000s of measurements to have sufficient predictive power.  Yikes!  Expect more on solving that problem in a future post… but for now let’s just get an update on the reflectometer.

Pictures and specs

FYI – We are in the initial stages of design, so everything is in flux and I know this is ugly looking.  Sharing too much too early is in our DNA, sorry 🙂

Our core design is based on the open source MultispeQ (a photosynthesis measurement device), which uses LED light sources at 10 different wavelengths, but is much lower cost.  While this isn’t a full spectrometer, it has the advantage of working independently of ambient light (unlike a normal spectrometer or simple colorimeter, where the sample must be in darkness) while being relatively inexpensive (cheaper BOM and less time/cost to make and calibrate).
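
The ambient-light independence works by differencing, which we can sketch (our reading of the approach, not actual MultispeQ firmware): read the detector with the LED off, then with it on, and subtract, so steady ambient light cancels out.

```python
# Sketch of ambient-light rejection in an LED-pulsed reflectometer.
# Ambient light appears in both readings and cancels; only the LED-driven
# reflectance remains. All counts below are hypothetical.
def reflectance_signal(detector_led_on, detector_led_off):
    """Return the ambient-corrected signal for one LED wavelength."""
    return detector_led_on - detector_led_off

ambient = 1200        # counts from sunlight / room light alone
led_reflected = 850   # counts from the LED pulse reflected off the sample

print(reflectance_signal(ambient + led_reflected, ambient))  # -> 850
```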

Ideally, we want users to be able to measure soil carbon, leaf chlorophyll content, brix from extracted sap, and the density of a pear fruit all at the same time with the same instrument.  This not only reduces the cost and increases utility, it also spreads our development costs across multiple applications.  The above design accommodates all of these uses.

This includes a digital tape measure, kind of like the Bagel.  As we validate that design more I’ll post more details.

The 3D design files are on OnShape, and the hardware and firmware files will be on the Our-Sci GitLab page.  It’s a work in progress, so expect to see frequent changes over the next few months.  Much of the hardware, software and firmware has already been tested and validated, so we hope to have a prototype device ready in only a few months.

We’ll keep the updates going between now and then, so stay tuned or sign up for email updates in the footer of this page.

Posted by gbathree
Agriculture needs an open solution stack

Farmers may not know about FOSS, but they know when they’re getting the short end of the stick.

There’s lots of talk about big data and tech in agriculture… and it falls into two main camps – there’s the ‘machines replace farmers’ camp, ranging from automated tractors, to automated home gardening, to home food computers.  These projects are interesting and full of promises but require something of a revolution to actually scale.  Then there’s the ‘farmers buy my black box service’ camp, a multi-billion dollar industry already providing tech services to mostly large farms – Monsanto, John Deere, DuPont all have platforms for collecting data and providing real time feedback (precision-ag, as it’s called), and there’s a huge number of startups working in the same space.  These services tend to be expensive and extremely closed (though some effort is going into creating common APIs at least…), and despite all the hype they are pretty underused on most actual farms.  Worse yet, most of those little startups want to get bought by the big guys… so the future of that space is one of consolidation and monopoly, like many other parts of the ag industry.

Both camps leave typical farmers scratching their heads…  either I’m irrelevant, or I need to pay many thousands of dollars per year for software I’m not sure I need?  Why is [insert big ag company’s name here] taking my data, repackaging it, and selling it back to me in the form of ‘precision ag’?  As a small to medium sized farmer, why can’t I find technology that’s useful but also affordable?  As the son of a farmer, I know that farmers are practical people – if it helps their bottom line, they’ll usually do it, but it doesn’t mean they like it.

Solution stack: “In computing, a solution stack or software stack is a set of software subsystems or components needed to create a complete platform such that no additional software is needed to support applications.” – Wikipedia

I think it’s time we invest in an open solution stack for agriculture.  Platforms and software that can deliver value to farmers today, at a reasonable cost, in a competitive ecosystem that can produce enough value for companies to succeed without fleecing farmers to pay Venture Capital firms and the bottomless stomachs of investors.  That’s not to say a healthy ecosystem of closed and open technology won’t continue to exist, but there is definitely room for alternatives.  There are good analogies here to other software stacks, like LAMP, which still powers a huge share of the web.  LAMP allowed the web to grow faster, at less cost, with more flexibility, and ultimately created more options for the end user, yet closed competitors continue to exist and fill specific market demands.

So… what functionality do we need in an open ag solution stack?

  1. Get data from sensors + the environment.  APIs to connect to existing sensors.  Access to APIs which output weather data, soil data, market information, accounting info, etc.
  2. Get data from humans.  A mobile app which can collect data from farmers, farm workers, accountants, etc. about what’s going on in the real world.
  3. View the farm and the business.  Farmers need to see maps of fields, get updates in real time of activities, get reminders about what’s next and where, etc.
  4. Get analysis and feedback.  Take inputs, run a model, generate (push) outputs.  Maybe catch a pest outbreak before it destroys your field.  Maybe pick the best time to sell your wheat.  Maybe wait a week to fertilize to avoid a rainstorm on Thursday.  That kind of thing.
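
One way to picture how those four layers could interoperate is a shared observation record that every input, sensor or human, gets normalized into. This is a hypothetical sketch, not an actual FarmOS or Our-Sci schema:

```python
# Hypothetical common record for an open ag stack: every input, whether
# from a sensor API (layer 1) or a human with a mobile app (layer 2),
# becomes one observation that views (3) and models (4) can consume.
from dataclasses import dataclass, field

@dataclass
class Observation:
    source: str          # e.g. "soil_probe_3", "weather_api", "farmhand_app"
    field_id: str        # which field or asset it describes
    kind: str            # e.g. "soil_moisture", "scouting_note", "price"
    value: object        # number, text, or structured payload
    timestamp: str = ""  # ISO 8601; empty until recorded
    tags: list = field(default_factory=list)

obs = Observation("weather_api", "north-40", "rainfall_mm", 12.5,
                  "2018-04-02T06:00:00Z", ["forecast"])
print(obs.kind, obs.value)
```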

These are the very basic components – like any ERP it could include so much more, or so much less depending on what the user wants.  By making a base layer of functionality available and low cost, we can move the business opportunities up the chain to services built on top of or connected to the stack.  Pay for Quickbooks to do your taxes, but use their API to integrate your financial data into your farm decision-making.  Pay Precision Hawk to do drone flyovers of your field to improve the quality of your maps, and integrate the maps into the open ag stack.  Pay an agronomist for consulting, but integrate their crop models to get real-time feedback through the open ag stack to increase accuracy and save you (and the agronomist) time.  This allows more efficient and higher quality code as many companies are invested in the same code base.

Another benefit of an open ag stack is the ability to share data.  New analytical tools can generate real value from shared data, but the current options for farms are either to forfeit their data to large companies through closed platforms (the exchange-data-for-a-service model) or to keep their data completely isolated (the exchange-money-for-privacy model).  The first represents a loss of control and of value to the farmer.  The second fails to benefit from shared resources.  An open platform could allow a more flexible middle option, where data is optionally shared fully, anonymized, or kept private, and the shared or anonymized data is accessible to all.  This would be a boon to researchers creating new methods or identifying trends, and to farmers improving their decision-making.
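That middle option is simple to express in code.  The sketch below is hypothetical — the field names and sharing levels are illustrative, not a real schema — but it shows the idea: each record carries a farmer-chosen sharing level, and only the allowed view of each record ever reaches the common pool:

```python
from copy import deepcopy

def build_shared_pool(records: list[dict]) -> list[dict]:
    """Return the view of the records that enters the common data pool."""
    pool = []
    for rec in records:
        level = rec.get("sharing", "private")  # default to most restrictive
        if level == "shared":
            pool.append(deepcopy(rec))  # full record, verbatim
        elif level == "anonymized":
            anon = deepcopy(rec)
            for key in ("farm_id", "owner", "location"):
                anon.pop(key, None)  # strip identifying fields
            pool.append(anon)
        # "private" records never leave the farm's own store
    return pool

records = [
    {"farm_id": "f1", "owner": "A", "crop": "wheat", "yield_t_ha": 3.1, "sharing": "shared"},
    {"farm_id": "f2", "owner": "B", "crop": "wheat", "yield_t_ha": 2.4, "sharing": "anonymized"},
    {"farm_id": "f3", "owner": "C", "crop": "oats",  "yield_t_ha": 2.9, "sharing": "private"},
]
pool = build_shared_pool(records)
# pool holds the shared record intact, the anonymized one stripped of
# identifiers, and nothing at all from the private one.
```

The key design choice is that the default is "private": a farm opts in to sharing rather than opting out, which keeps control with the farmer.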

There is some movement already towards an open solution stack.  The GODAN initiative is supporting data sharing in agriculture and nutrition by improving data accessibility and collaboration between governments and NGOs.  This could lead to standardized ag data formats that improve interoperability between different sensors and software.

FarmOS is a Drupal-based farm management platform that emerged from the Farm Hack network.  Now used on over 200 farms across the US, FarmOS is expanding its capabilities, and Michael Stenta, the main developer, is committing more time to the project.  Our-Sci is a startup building on the software framework developed for the PhotosynQ project at Michigan State University: a research framework for data collection, sharing, and analysis.  It’s effective for creating and standardizing new methods, and for developing feedback based on sensor and survey data.

But a great deal more is needed, along with a coherent plan for future development.  The more organized our development, the easier it is for developers to contribute helpful code, and the more usable the resulting solution is for farmers.  If you are interested in contributing to the creation of this open ag stack, please contact us to get involved.


Posted by gbathree in Blog Posts, Other Applications
People-led Research: A strange, sleeping giant


The sleeping giant, over the Lumponian Homeworld, from Legends of Zita the Spacegirl

People-led research is a sleeping giant, and it might be waking up.  At least, people have been working hard at waking it up.  There are now collaboration platforms for everything including data collection, analysis, product design and development, image identification… the list goes on and on.  Citizen Science projects are booming, and SciStarter is making them easier to find and join.  Cell phones make collaborative data collection easier, and Open Data Kit reduces app development costs for survey-based projects.  Public Lab has pioneered community led technology development, establishing best practices for collaboration online and in person.  And the list goes on.

Slowly but surely, we’re moving to a new reality, much of it driven by technology.   Sooner than we think, we’ll be in a world where sensors are ubiquitous, comparable, of scientific quality, and in the hands of everyday people.  Data sharing will become the norm, so large reference datasets can be built to make the data meaningful.  Contributors will be validated over time, similar to eBay’s buyer/seller ratings system, to improve confidence in the data and establish invested communities.  Satellite, weather, air quality, and related data will be public and accessible anywhere, resulting in near real-time environmental monitoring of every location on the planet.  And the barriers to our scientific history will (finally!) be gone.  Publishing will be fully public and searchable.

But most progress is still driven from the top down by organizations – and ultimately funders.  Do-gooders trying to do good, because good just won’t happen on its own (I know, I’m one of them).  Our big goal is that everyone, everywhere is conscious and confident of their ability to engage difficult questions using the tools of science and inquiry.  Specifically, that everyone has the capacity, knowledge, and reach to do meaningful science and research regardless of location, class, or education.  Supporting this capacity is the mission of GOSH, and many others in the open science community.

This is not to say people aren’t smart, capable, and inquisitive right now.  Most people use the scientific method every day – in our homes, in our work, with and on our children, out of curiosity or out of necessity.  But an increasing portion of our lives involves things we cannot understand using the standard tools of sensory experience, anecdotal evidence, and intuition.  For those complex problems, we look “up”… to governments, academics, and industry leaders.  People we are supposed to trust, but very often do not.  Technology gives everyone the capacity to tackle more complex problems, but it does not, by itself, restore the self-awareness that we are responsible for doing so.

So what does the world look like when the do-gooders and governments and companies all step aside, and instead of looking up we start looking around – to ourselves, our families, and our communities, to both understand and solve our problems?

Well, what happened when the world awoke to the internet?  Cat videos.  Fan fiction.  Desert phone booths.  Countless IRC chatrooms with people you don’t know.  A new use and scope of the word “viral”.  And, of course, lots and lots of sex (no link needed here I think).  People have an insatiable appetite to communicate – that’s the web.  They have that same appetite to understand.  That’s people-led research.  And like the internet, we’ll be slightly embarrassed by what the sum of our collective efforts says about humanity.

So yeah, some of it will be weird… but that’s totally OK.  People-led research will include things like bigfoot studies and the effect of positive energy on how rice molds.  But it could also identify new and unexpected technologies ignored by scientists, or engage in massive, global research projects not possible without thousands or millions of people.  It will sway wildly with popular opinions and headlines and fads.  It will find its way into under-represented communities and into social justice movements.  Maybe it will grow out of biohackerspaces, science shops, or just nucleate everywhere as technology becomes ubiquitous.  Honestly, who knows… but one thing is for sure – it will be very different from most research done today.

But if you want to present research as a non-scientist, just go on the internet, make a web page or youtube channel and post your experiment and findings – right?

Not really.  There are two problems:

  1. There are few resources to help the 99% build the skills to produce higher-quality work, and…
  2. There is no meritocratic path for their work to be taken seriously when it is done well.

So those webpages and youtube channels are often poor quality, and even the good ones fall on deaf ears.  For all the effort put into making a million grad students into good scientists, no one puts effort into training 100 million citizens in a similar way.   Maybe because those 100 million people aren’t getting grants.  Maybe because they’re not producing intellectual property we can capture.  Maybe because we think they are crazy.  Maybe because “their” problems aren’t “our” problems.  Maybe because their problems don’t create products and generate revenues.  Or maybe doing good science is just too darn complicated unless it’s your profession and full-time job.

The motivation for Our-Sci is that 100 million people can and should do science and research.  We want to help communities do the hard work of answering questions and solving problems in a scientifically rigorous way.  Instead of rejecting, ignoring, or downplaying their work, we believe that humanity will gain a massive ally if everyone is allowed to play the game.  This isn’t to say that the rules of the game will change – high quality and comparable data, scientifically rigorous experiments, peer review, and reproducibility remain the targets to strive for.  But the players, the focus, and the culture will change.  In some ways for the better, and probably in some ways for the worse.  In either case, people-led research will be a radically honest representation of humanity, just as the internet is today.  And we believe the world will be better for it.

Personally, I can’t wait to see the giant wake up.

Posted by gbathree