Machine and Human Learning
Before digging into the details, we had to convert the data into a form that could be easily analyzed. During this phase, our data engineering team worked closely with the researchers to shape the data, eliminate statistical outliers, and decide which data subsets were most relevant to the research questions; the data was also cleaned and pre-processed at this stage.
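The article doesn't show the cleaning code; as one illustrative piece, a simple z-score filter sketches the kind of outlier elimination we mean (the function name, threshold, and sample readings are made up for this example):

```python
from statistics import mean, stdev

def drop_outliers(values, z_max=2.0):
    """Remove points more than z_max standard deviations from the mean.

    A deliberately simple z-score filter; the thresholds and per-batch
    grouping used in the actual project were more involved.
    """
    m, s = mean(values), stdev(values)
    if s == 0:
        return list(values)
    return [v for v in values if abs(v - m) / s <= z_max]

# Example: one extreme sensor reading (480.0) is dropped.
readings = [198.0, 201.5, 199.8, 202.1, 480.0, 200.3]
clean = drop_outliers(readings, z_max=2.0)
```

In practice such filters are tuned per sensor and per roast phase; a single global threshold is only a starting point.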
After the data cleaning, the research team tried to identify which roasting parameters have the largest influence on the quality of the roasted coffee (as measured by cupping score). We used the temperature time-series data and the corresponding metadata and looked for relationships with the cupping score. In total we analyzed more than 40 features, such as mean temperatures in certain phases, their deviations, and the skewness and kurtosis of the roast curves. Since more than 40 features are hard to visualize and interpret at once, we reduced the dimensionality of our data analysis with a principal component analysis. As we expected, given the many relevant variables in roasting, this first approach did not reveal any significant patterns, though it did point us in the right direction going forward.
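The write-up doesn't include the analysis code; a minimal PCA sketch via SVD of the centered feature matrix shows the idea (the random feature matrix stands in for the real 40+ roast features):

```python
import numpy as np

def pca(X, n_components=2):
    """Project feature matrix X (n_samples x n_features) onto its
    first principal components via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    # Singular values come back sorted in descending order.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T          # projected samples
    variances = S**2 / (len(X) - 1)            # variance per component
    return scores, variances

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 40))                 # stand-in for 40+ roast features
scores, variances = pca(X, n_components=2)
```

Plotting the first two score columns gives the usual 2-D view of the sample cloud; the variance array shows how much each component explains.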
We then analyzed the similarity of each roast curve to its corresponding profile curve. Although this sounds relatively easy, it is actually quite complex: since roast curves are never exactly the same length, we had to use shape-matching algorithms to align the roast curve with its profile curve without losing too much information. Unfortunately, the difference between the roast and profile curves did not show a direct correlation with the cupping score.
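The article doesn't name the shape-matching algorithm; dynamic time warping (DTW) is a standard choice for comparing sequences of unequal length and illustrates the alignment problem:

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two sequences of
    possibly different length, using absolute difference as local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    # D[i][j] = cost of the best alignment of a[:i] with b[:j]
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# A roast curve that lingers at 120 still aligns perfectly with a
# shorter profile curve, because DTW may repeat points during matching.
profile = [100, 120, 150, 180, 200]
roast   = [100, 120, 120, 150, 180, 200]
```

Production code would use an optimized library implementation, but the quadratic table above is the whole algorithm.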
Next, we simplified the problem to a binary one: roasts with the highest possible normalized score counted as good, while any roast with a lower score counted as bad. We applied state-of-the-art machine learning algorithms to this simplified data set and were then able to detect good and bad roasts with a statistically significant level of accuracy.
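The article doesn't say which algorithms were used; a random-forest baseline with cross-validation sketches the binary setup (the synthetic features and labels below are made up for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 40 curve features per roast, binary good/bad label.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 40))
# Labels driven by two features, mimicking a learnable signal.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold accuracy estimates
```

Cross-validated accuracy well above the class baseline is what "statistically significant level of accuracy" boils down to in practice; on real roast data one would also check per-class recall, since bad roasts may be rare.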
We then turned our analysis to predicting the flavors a roast would develop. The university team developed a new representation of the problem space which allowed them to train machine learning algorithms to predict flavor labels from the temperature curve. This task required intensive preprocessing of the labels in collaboration with one of our roasting experts. The results not only allowed us to predict the flavor labels of a roast from its curve to a significant degree, but also showed, for instance, that bitter tastes form during roasts that remain longer in a medium temperature range (ca. 125–175 °C) before reaching high temperatures (>220 °C) later in the roast.
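One feature behind that bitterness finding can be computed directly from a curve: how long the roast dwells in the medium temperature band. A minimal sketch (the function name, sampling interval, and example curve are illustrative, not from the project):

```python
def seconds_in_band(times, temps, low=125.0, high=175.0):
    """Total time (in the units of `times`) the curve spends inside
    [low, high].

    A crude approximation: each sample's temperature is assumed to
    hold until the next timestamp.
    """
    total = 0.0
    for i in range(len(temps) - 1):
        if low <= temps[i] <= high:
            total += times[i + 1] - times[i]
    return total

# Sampled every 30 s; this curve sits in the 125-175 °C band
# for two intervals before climbing past 220 °C.
times = [0, 30, 60, 90, 120, 150]
temps = [100, 130, 160, 190, 215, 225]
```

Features like this one, computed per roast, are exactly the kind of input the flavor-prediction models consume.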
The results of the first project were refined in a second one, in which we improved the mapping between flavors and the temperature curve by adding sentiment labels to the flavor labels and combining the labels in a systematic manner. This allowed us to reduce the number of potential classes and to eliminate uncommon labels from the target dataset. With this second dataset we were able to expand the prototype to predict multiple combinations of labels at once.
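The combining-and-pruning step can be sketched in a few lines; the labeling scheme below (`flavor:sentiment` strings, a minimum-count cutoff) is a hypothetical stand-in for the project's actual scheme:

```python
from collections import Counter

def combine_and_prune(flavor_rows, min_count=2):
    """Attach sentiment to each flavor label, then drop label
    combinations that occur fewer than `min_count` times overall.

    `flavor_rows` is a list of (flavor, sentiment) pairs per roast.
    """
    combined = [tuple(f"{flavor}:{sentiment}" for flavor, sentiment in row)
                for row in flavor_rows]
    counts = Counter(label for row in combined for label in row)
    return [tuple(l for l in row if counts[l] >= min_count)
            for row in combined]

rows = [
    [("fruity", "pos"), ("bitter", "neg")],
    [("fruity", "pos")],
    [("smoky", "neg")],          # occurs only once -> pruned
]
pruned = combine_and_prune(rows)
```

Shrinking the label space this way is what makes a multi-label classifier over label combinations tractable: rare combinations would otherwise leave classes with almost no training examples.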
The Cropster Data Project has already shown us that there is much more to discover. Even with a few small-scale research projects, we have found and visualized relationships between roasting data and the quality of the final product. Some of the ideas worked well; some proved not to be worth pursuing. Either way, we are just getting started and plan to continue our collaboration with enthusiastic researchers from different domains. Many aspects of coffee roasting remain mysterious, and the new knowledge we gained often left us with more questions than answers. These questions invite further analysis of the coffee roasting process, and we are excited to continue that research!