
The future of fresh produce: a skeptical optimist’s view


Imagine if consumers and farmers could measure the nutrient density of fresh produce on farms and in stores in seconds. Consumers would demand nutrient-dense produce because they could see it empirically at the point of sale. Farmers would get a premium for more nutritious crops. Higher prices would motivate farmers to develop farming practices that increase nutrient density in addition to yield. The result? A market-driven, sustainable way to improve the health of millions of people.

The Bionutrient Food Association is trying to make this happen, and Our Sci is going to help.

If you’re an informed foodie/techie, you’re probably rolling your eyes. There are lots of real problems with this utopian-sounding plan. Well, you’re right… but Our Sci wouldn’t take this on if we didn’t think it was possible, so stick with me.

Here’s the plan…

Build a device to measure nutrient density

Reality check — there is no device capable of measuring every nutrition-related compound of interest. Doesn’t exist. Instead, our strategy is to correlate reflectance data in the UV/VIS/NIR spectra to broad classes of nutrients using standard lab methods (US Pharmacopeia, USDA, ASME, etc.). NIR reflectance instruments, like the SCiO, have built a lot of hype and failed to deliver… literally.  The concept of building correlations between UV/VIS/NIR and reference data is not new. Examples include estimating total carbon in soil (both us and others), total cannabinoids in marijuana, dietary fiber and other compounds in fresh produce, and drug identification in pills.

What we’re attempting is one step beyond those examples — nutrients in fresh produce are present in very small quantities, and their relationship to UV/VIS/NIR will be more complex than in the aforementioned examples. Proof-of-concept work will determine the level of granularity and breadth of compounds that we can predict. A full argument about the tech would take a post of its own, and I’m sure I’ll write it at some point… but for now let’s assume we can crack that nut. Next problem: spectral data can’t actually ‘see’ the compounds of interest, so this magic only works if you have a sufficiently large and detailed reference database. Sending a sample to a lab to measure a relatively small number of compounds costs hundreds of dollars per sample, so building that reference dataset is no small feat. Enter part 2 of the plan…

Run a national survey of nutrient density in stores and on farms

That’s going to be pricey! We’re working on strategies to generate revenue from collecting the reference data, and lowering the cost of collecting it. We think that even without a device, an effectively designed survey would produce a dataset which could help direct purchasing decisions today. There are many organizations and even individuals interested in making this data publicly available and we are pursuing them as funding partners.

To lower costs, the Bionutrient Food Association’s membership base can help collect the samples. Also, we’ve partnered with Health Research Institute, where we can poach (with permission of course!) incoming samples from other projects/clients and collect measurements using our device alongside their lab measurements.

Make it a movement, not a product

Measuring nutrient density willy-nilly with no feedback mechanism to farmers will not change the food system. We need to establish a conduit between farms and consumers so nutrient density information is traceable. We also need to allow researchers (from academia, industry, and engaged citizens) to identify and share insights from the data. Furthermore, we can help everyone in the system self-organize experiments to test harder problems. Sure — you can mine consumer data to figure out which farms are making the most nutrient-dense tomato. But what if you want to know how tomatoes impact heart disease? Then you need to be able to organize an experiment and invite consumers to join, collaborate, and communicate over time.

Important experiments should be run in the real world, with real people. Well designed collaboration software and public data (with reasonable guards in place for privacy) make those interactions easier and more likely to happen.

Still skeptical?

Ok, one last pitch: even if UV/VIS/NIR reflectance doesn’t work today, some day a new technology will be able to predict the nutrient content of food easily, accurately, and cheaply. When that day comes, companies will sell you the device. The data streams will be closed, and mined for insights sold to the highest bidder. Researchers will have to pay to use the data (the public won’t see it at all), slowing the pace of learning about how food nutrition impacts human health. The best insights will be kept by the companies, patented, and turned into products (super-food extracts or new drugs or whatever) sold back to us at 100x markups. It’s not a dystopia — it’s reality. Think I’m overstating it? It’s already happening to your social data. Consumer Physics, the company behind the SCiO — which delivered wildly late and completely overstated what the device could do — is now in a patent dispute. Yay, progress… for lawyers, at least.

So more than anything else, this collaboration is about getting ahead of the problem.

Let’s put our flag in the ground: information and technology relating to our food supply should be a public good.

The full campaign, details and project plan are available at the Real Food Campaign section of bionutrient.org. Go follow them on Facebook and Twitter. You can read more from me and other Our Sci folks at our-sci.net.

Posted by gbathree in Blog Posts, Nutrition
Reflectometer progress: circuit board


In case you missed the last post, we are building a reflectometer.  The goal is a simple-to-use, low-cost, flexible device for a variety of measurements, including tree canopy, soil carbon, and food nutrient density.

If you’re curious about the development process, IRNAS produced a great post walking through the steps from understanding the application to scaled manufacturing.  We are in the rapid prototyping phase, though because we are basing the design on work we already did for the PhotosynQ project, I expect it will be more rapid than usual.  Here are some updates.

Design Criteria

To build a generalized design usable in several applications, we tried to make a device which could measure several types of objects effectively.  To measure reflectance effectively, the background behind the object of interest must be consistent – variation in the background will produce noise in the signal.  The mechanical design addresses this (see image below).  Here’s the list of materials this device is designed to measure.

handheld reflectometer

Drop of liquid – drop from a crushed leaf or crushed food. Refraction, Brix.
Small cuvette of liquid – juice, chemical mixture. Reflectance at all wavelengths, density, colorimetry.
Bulk solid – soil. Soil carbon.
2D object – leaf, paper. Chlorophyll content.
3D object – fruit, vegetable. Reflectance at all wavelengths, correlated to nutritional content. (This use cannot utilize a consistent backdrop due to size, and cannot require a guaranteed distance to the sample.)

In many projects, signal quality can be defined from the start.  However, here it’s hard to say whether we need less than 0.5% noise or less than 0.05% noise without simply testing on actual samples.  So, we will have a reference set of objects (colored liquids, bulk solids, colored paper of different thicknesses, etc.) to confirm that we are hitting the quality needed as we iterate on the design.  We are building that reference set now, so when we have our first prototype we will be ready to put it through its paces!
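To make “less than 0.5% noise” concrete: one simple metric is the coefficient of variation across repeated scans of the same reference object.  A sketch, with illustrative readings rather than real data:

```python
import statistics

def relative_noise(readings):
    """Coefficient of variation (%) across repeated scans of one reference."""
    mean = statistics.mean(readings)
    return 100 * statistics.stdev(readings) / mean

# Ten repeated reflectance readings of the same colored-paper reference.
scans = [0.812, 0.815, 0.813, 0.814, 0.812,
         0.816, 0.813, 0.815, 0.814, 0.813]
noise = relative_noise(scans)
print(f"noise: {noise:.3f}%")  # compare against the 0.5% or 0.05% target
```

Running each reference object through a check like this after every design iteration tells us whether a change helped or hurt.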

The next step was to produce a schematic.  That’s just a fancy word for a wiring diagram – which parts connect to which other parts.  We use a program called KiCAD, which is free and open source and amazing.  Here’s a picture of our schematic for the reflectometer.

reflectometer schematic

Once that’s complete, then you actually have to wire it up.  Here’s what the actual circuit board looks like.

reflectometer layout

You then send this layout to a company which manufactures the board.  Next, someone has to put all the parts on the board – in our case, about 200 components need to be soldered on.  Until recently, that was the slowest part of rapid prototyping and caused lots of delays: hand soldering produces errors and takes time, and having an outside contractor do it took too much back and forth.  But now companies provide fast turnaround (2 weeks!) on fully populated boards, and have huge stock inventories so they often don’t have to order your parts at all – they are already in house.  That means you can prototype faster and cheaper than ever before.

We are less than a week away from shipping this design out to be routed and populated.  A few weeks after that we should have boards in hand, ready to test against our standards.  If we find we exceed our standards, then we can begin to identify places for cost savings, hopefully getting the board into the $50-in-parts range.

If you want to download the schematic, you can find it on our GitLab page – https://gitlab.com/our-sci/reflectance-spec-PCB.  More updates in a few weeks.


Posted by gbathree in Open Tech Development
We are building an open source reflectometer… and here’s why


“But wait,” you say, “there are already some out there, and they are pretty well designed and reasonably priced!”  Well, yes – there are full spectrometers like the Spectruino ($411), the Open Source Colorimeter ($80 + $20 per LED) from IORodeo, and publications from universities describing open colorimeter designs (Appropedia and MTU have a good one, but there are several others – these are DIY, so < $100 in parts).  Pretty cheap, and lots of available designs.

Like the seat designed for the average person but usable by no one, product designers should avoid the law of averages.  As in that case, the aforementioned devices are too general purpose to be particularly useful.  The MultispeQ ($600) could work, but was designed for photosynthesis measurements and is over-designed for applications outside of photosynthesis.  For our community partners, none of these devices do exactly what they need, which is…

Arborists need a low cost and easy to use chlorophyll meter to add more rigorous sensor data to visual tree assessments.

Consumers + farmers need a way to measure food nutrient density in stores and on farms.

Soil scientists and regulators need to measure soil carbon in the field, quickly and easily.  Doing so could create a massive new pathway for carbon markets to value sequestration of carbon in soil.

Cannabis growers, consumers, and dispensaries need to be able to confirm total cannabinoids and THC levels to comply with regulations, ensure quality product, and identify fraudsters.

These cases require a device which is low cost, easy for non-scientists to use, flexible in what it measures (drops of liquid, cuvettes, leaves, aggregate solids like soil, and whole solids like a pear), usable in field conditions, fast, and open source.  Reflectance is a pretty simple measurement and tells you almost nothing without reference data.  Reference data pairs reflectance values with validated lab-based measurements on the same set of samples to build correlations between the two (if they exist!).  But building a reference database can be very expensive.  In the case of food nutrition, measuring a small suite of lab tests for vitamins, minerals and antioxidants can cost $500 or more.  A reference database might contain 100s or 1000s of measurements to have sufficient predictive power.  Yikes!  Expect more on solving that problem in a future post… but for now let’s just get an update on the reflectometer.

Pictures and specs

FYI – We are in the initial stages of design, so everything is in flux and I know this is ugly looking.  Sharing too much too early is in our DNA, sorry 🙂

Our core design is based on the open source MultispeQ (a photosynthesis measurement device), which uses LED-supplied light sources at 10 different wavelengths, but is much lower cost.  While this isn’t a full spectrometer, it has the advantage of working independently of ambient light (unlike a normal spectrometer or simple colorimeter, where the sample must be in darkness) while being relatively inexpensive (cheaper BOM and less time/cost to make and calibrate).
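The ambient-light independence works by taking a detector reading with the LED off (ambient only) and subtracting it from a reading with the LED on.  A simplified sketch of the idea, with made-up detector counts rather than real MultispeQ output:

```python
def reflectance(led_on_reading, led_off_reading, calibration):
    """Ambient-corrected reflectance: subtract the detector signal with the
    LED off (ambient only) from the signal with it on (ambient + reflected),
    then normalize by a calibration reading taken against a white standard."""
    return (led_on_reading - led_off_reading) / calibration

# Hypothetical detector counts at one of the 10 LED wavelengths.
ambient = 1200      # LED off: sunlight leaking into the detector
with_led = 46200    # LED on: ambient plus light reflected from the sample
white_ref = 60000   # ambient-corrected reading against a white standard

print(reflectance(with_led, ambient, white_ref))  # 0.75
```

Because the ambient term cancels out, the same measurement works in a dark lab or a sunny field.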

Ideally, we want users to be able to measure soil carbon, leaf chlorophyll content, Brix from extracted sap, and the density of a pear all with the same instrument.  This not only reduces cost and increases utility, it also spreads our development costs across multiple applications.  The above design accommodates all of these uses.

The design also includes a digital tape measure, kind of like the Bagel.  As we validate that design more, I’ll post more details.

Here is the link to the 3D design files on Onshape: https://cad.onshape.com/documents/849be056da41993fee5440bf/w/fca66e62cdc277f2c4c8e7fb/e/e343e4d4ecd695e477fba916.  The hardware and firmware files will be on the Our-Sci GitLab page here: https://gitlab.com/our-sci.  It’s a work in progress, so expect to see frequent changes over the next few months.  Much of the hardware, software and firmware has already been tested and validated, so we hope to have a prototype device ready in only a few months.

We’ll keep the updates going between now and then, so stay tuned or sign up for email updates in the footer of this page.

Posted by gbathree
Agriculture needs an open solution stack


Farmers may not know about FOSS, but they know when they’re getting the short end of the stick.

There’s lots of talk about big data and tech in agriculture… and it falls into two main camps – there’s the ‘machines replace farmers’ camp, ranging from automated tractors, to automated home gardening, to home food computers.  These projects are interesting and full of promises but require something of a revolution to actually scale.  Then there’s the ‘farmers buy my black box service’ camp, a multi-billion dollar industry already providing tech services to mostly large farms – Monsanto, John Deere, DuPont all have platforms for collecting data and providing real time feedback (precision-ag, as it’s called), and there’s a huge number of startups working in the same space.  These services tend to be expensive and extremely closed (though some effort is going into creating common APIs at least…), and despite all the hype they are pretty underused on most actual farms.  Worse yet, most of those little startups want to get bought by the big guys… so the future of that space is one of consolidation and monopoly, like many other parts of the ag industry.

Both camps leave typical farmers scratching their heads…  either I’m irrelevant, or I need to pay many thousands of dollars per year for software I’m not sure I need?  Why is [insert big ag company’s name here] taking my data, repackaging it, and selling it back to me in the form of ‘precision ag’?  As a small to medium sized farmer, why can’t I find technology that’s useful but also affordable?  As the son of a farmer, I know that farmers are practical people – if it helps their bottom line, they’ll usually do it, but it doesn’t mean they like it.

Solution stack: “In computing, a solution stack or software stack is a set of software subsystems or components needed to create a complete platform such that no additional software is needed to support applications.” – Wikipedia

I think it’s time we invest in an open solution stack for agriculture.  Platforms and software that can deliver value to farmers today, at a reasonable cost, in a competitive ecosystem that can produce enough value for companies to succeed without fleecing farmers to pay venture capital firms and the bottomless stomachs of investors.  That’s not to say a healthy ecosystem of closed and open technology won’t continue to exist, but there is definitely room for alternatives.  There are good analogies here to other software stacks, like LAMP, which still underpins a large share of websites on the internet.  LAMP allowed the web to grow faster, at less cost, with more flexibility, and ultimately created more options for the end user, yet closed competitors continue to exist and fill specific market demands.

So… what functionality do we need in an open ag solution stack?

  1. Get data from sensors + the environment.  APIs to connect to existing sensors.  Access to APIs which output weather data, soil data, market information, accounting info, etc.
  2. Get data from humans.  A mobile app which can collect data from farmers, farm workers, accountants, etc. about what’s going on in the real world.
  3. View the farm and the business.  Farmers need to see maps of fields, get updates in real time of activities, get reminders about what’s next and where, etc.
  4. Get analysis and feedback.  Take inputs, run a model, generate (push) outputs.  Maybe catch a pest outbreak before it destroys your field.  Maybe pick the best time to sell your wheat.  Maybe wait a week to fertilize to avoid a rainstorm on Thursday.  That kind of thing.
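To make items 1 and 2 concrete, a shared stack would need a common record format that sensor APIs and the mobile app both emit.  A hypothetical sketch (this is not an existing standard, and the field names are invented for illustration):

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class FarmObservation:
    """One observation, whether from a sensor API or a human with the app."""
    farm_id: str
    field_id: str
    source: str          # e.g. "soil-probe-3" or "farmworker-survey"
    measurement: str     # e.g. "soil_moisture"
    value: float
    unit: str
    timestamp: str       # ISO 8601, UTC

obs = FarmObservation(
    farm_id="farm-042", field_id="north-20", source="soil-probe-3",
    measurement="soil_moisture", value=0.31, unit="m3/m3",
    timestamp=datetime(2018, 4, 1, 6, 0, tzinfo=timezone.utc).isoformat(),
)
print(json.dumps(asdict(obs)))  # ready for any service built on the stack
```

With one shared schema like this, the viewing layer (item 3) and the analysis layer (item 4) can consume data from any source without custom glue code.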

These are the very basic components – like any ERP it could include so much more, or so much less depending on what the user wants.  By making a base layer of functionality available and low cost, we can move the business opportunities up the chain to services built on top of or connected to the stack.  Pay for Quickbooks to do your taxes, but use their API to integrate your financial data into your farm decision-making.  Pay Precision Hawk to do drone flyovers of your field to improve the quality of your maps, and integrate the maps into the open ag stack.  Pay an agronomist for consulting, but integrate their crop models to get real-time feedback through the open ag stack to increase accuracy and save you (and the agronomist) time.  This allows more efficient and higher quality code as many companies are invested in the same code base.

Another benefit of an open ag stack is the ability to share data.  New analytical tools can generate real value from shared data, but the current options for farms are either to forfeit their data to large companies through closed platforms (the exchange-data-for-a-service model), or to keep their data completely isolated (the exchange-money-for-privacy model).  The first represents a loss of control and of value to the farmer.  The second fails to benefit from shared resources.  An open platform could allow a more flexible middle option, where data is optionally shared fully, anonymized, or kept private, and the shared or anonymized data is accessible to all.  This would be a boon to researchers to create new methods or identify trends, and to farmers to improve decision-making.
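That middle option could be as simple as a per-record sharing level applied before any data leaves the farm.  A hypothetical sketch (the field names and levels are invented for illustration):

```python
FULL, ANONYMIZED, PRIVATE = "full", "anonymized", "private"

def prepare_for_sharing(record, level):
    """Apply the farmer's chosen sharing level before upload."""
    if level == PRIVATE:
        return None                      # never leaves the farm
    shared = dict(record)
    if level == ANONYMIZED:
        # Strip identifying fields; keep the measurement itself.
        for field in ("farm_id", "owner", "location"):
            shared.pop(field, None)
    return shared

record = {"farm_id": "farm-042", "owner": "A. Farmer",
          "location": "44.76,-85.62", "yield_bu_ac": 182}
print(prepare_for_sharing(record, ANONYMIZED))  # {'yield_bu_ac': 182}
```

The key design point is that the farmer picks the level per record, and the anonymized pool is still useful for researchers in aggregate.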

There is some movement already towards an open solution stack. The GODAN initiative is supporting data sharing in agriculture and nutrition by improving data accessibility and collaboration between governments and NGOs.  This could lead to standardizing ag data to improve interoperability between different sensors and software.

FarmOS is a Drupal-based farm management platform which emerged from the Farm Hack network.  Now used on over 200 farms across the US, FarmOS is expanding its capabilities, and Michael Stenta, the main developer, is committing more time to the project.  Our-Sci is a startup building on the software framework developed for the PhotosynQ project at Michigan State University: a research framework for data collection, sharing, and analysis.  It’s effective for creating and standardizing new methods, and for developing feedback based on sensor and survey data.

But a great deal more is needed, as well as a coherent plan for future development.  The more organized our development, the easier it is for developers to contribute helpful code and the more usable the solution becomes for farmers.  If you are interested in contributing to the creation of this open ag stack, please contact us to get involved.


Posted by gbathree in Blog Posts, Other Applications
People-led Research: A strange, sleeping giant


The sleeping giant, over the Lumponian Homeworld, from Legends of Zita the Spacegirl

People-led research is a sleeping giant, and it might be waking up.  At least, people have been working hard at waking it up.  There are now collaboration platforms for everything including data collection, analysis, product design and development, image identification… the list goes on and on.  Citizen Science projects are booming, and SciStarter is making them easier to find and join.  Cell phones make collaborative data collection easier, and Open Data Kit reduces app development costs for survey-based projects.  Public Lab has pioneered community led technology development, establishing best practices for collaboration online and in person.  And the list goes on.

Slowly but surely, we’re moving to a new reality, much of it driven by technology.  Sooner than we think, we’ll be in a world where sensors are ubiquitous, comparable, of scientific quality, and in the hands of everyday people.  Data sharing will become the norm, so large reference datasets can be built to make the data meaningful.  Contributors will be validated over time, similar to eBay’s buyer/seller ratings system, to improve confidence in the data and establish invested communities.  Satellite, weather, air quality, and related data will be public and accessible anywhere, resulting in near real-time environmental monitoring of every location on the planet.  And the barriers to our scientific history will (finally!) be gone.  Publishing will be fully public and searchable.

But most progress is still driven from the top down by organizations – and ultimately funders.  Do-gooders trying to do good, because good just won’t happen on its own (I know, I’m one of them).  Our big goal is that everyone, everywhere is conscious and confident of their ability to engage difficult questions using the tools of science and inquiry.  Specifically, that everyone has the capacity, knowledge, and reach to do meaningful science and research regardless of location, class, or education.  Supporting this capacity is the mission of GOSH, and many others in the open science community.

This is not to say people aren’t smart, capable, and inquisitive right now.  Most people use the scientific method every day – in our homes, in our work, with and on our children, out of curiosity or out of necessity.  But an increasing portion of our lives involves things we cannot understand using the standard tools of sensory experience, anecdotal evidence, and intuition.  For those complex problems, we look “up”… to governments, academics, and industry leaders.  People we are supposed to trust, but very often do not.  Technology gives everyone the capacity to tackle more complex problems, but it does not give back the self-awareness that we’re responsible to do so.  

So what does the world look like when the do-gooders and governments and companies all step aside, and instead of looking up we start looking around – to ourselves, our families, and our communities, to both understand and solve our problems?

Well, what happened when the world awoke to the internet?  Cat videos.  Fan fiction.  Desert phone booths.  Countless IRC chatrooms with people you don’t know.  A new use and scope of the word “viral”.  And, of course, lots and lots of sex (no link needed here I think).  People have an insatiable appetite to communicate – that’s the web.  They have that same appetite to understand.  That’s people-led research.  And like the internet, we’ll be slightly embarrassed by what the sum of our collective efforts says about humanity.

So yeah, some of it will be weird… but that’s totally OK.  People-led research will include things like the effect of positive energy on rice mold, and bigfoot studies.  But it could also identify new and unexpected technologies ignored by scientists, or engage in massive, global research projects not possible without thousands or millions of people.  It will sway wildly with popular opinions and headlines and fads.  It will find its way into under-represented communities and into social justice movements.  Maybe it will grow out of biohackerspaces, science shops, or just nucleate everywhere as technology becomes ubiquitous.  Honestly, who knows… but one thing is for sure – it will be very different from most research done today.

But if you want to present research as a non-scientist, just go on the internet, make a web page or youtube channel and post your experiment and findings – right?

Not really.  There are two problems:

  1. There are few resources to build skills among the 99% to produce higher quality work, and…
  2. There’s no meritocratic path for their work to be taken seriously when it is done well.

So those webpages and youtube channels are often poor quality, and even the good ones fall on deaf ears.  For all the effort put into making a million grad students into good scientists, no one puts effort into training 100 million citizens in a similar way.  Maybe because those 100 million people aren’t getting grants.  Maybe because they’re not producing intellectual property we can capture.  Maybe because we think they are crazy.  Maybe because “their” problems aren’t “our” problems.  Maybe because their problems don’t create products and generate revenues.  Or maybe doing good science is just too darn complicated unless it’s your profession and full-time job.

The motivation for Our-Sci is that 100 million people can and should do science and research.  We want to help communities do the hard work of answering questions and solving problems in a scientifically rigorous way.  Instead of rejecting, ignoring, or downplaying their work, we believe that humanity will gain a massive ally if everyone is allowed to play the game.  This isn’t to say that the rules of the game will change – high quality and comparable data, scientifically rigorous experiments, peer review, and reproducibility remain the targets to strive for.  But the players, the focus, and the culture will change.  In some ways for the better, and probably in some ways for the worse.  In either case, people-led research will be a radically honest representation of humanity, just as the internet is today.  And we believe the world will be better for it.

Personally, I can’t wait to see the giant wake up.

Posted by gbathree