July has been a productive month for us. It’s been a month since we released our first competition: building a machine learning tool for detecting COVID-19 from blood tests. There are already several submissions and models, and we have some great news for you!
This month, we have delivered two major features, which we would like to share with you. In a nutshell, they are the following.
- Last week’s best model is now available for everyone to use.
- Winners can upload their inference code via a new UI, instead of emailing it to us.
Read on to find out about them in detail! We will also give a sneak peek into the upcoming features we are working on. We would love to hear your feedback on those!
Releasing the best model every week
We promised to make the best model available every week, and we delivered this feature just yesterday. You can try it out right now on the model tab of the COVID-19 competition dashboard.
This feature is of particular importance to us. Most of us come from a computational biology background, where deploying deep learning models and making methods widely available is often neglected. (This was one of the reasons why some of us created nucleAIzer, a free online tool for detecting cell nuclei in microscopy images.) With this, we hope to help researchers and practitioners in the fight against COVID-19.
Model upload UI
One of the cornerstones of our competitive crowdsourcing model is the emphasis on production. We aim to deploy the best solution every week, building the competition itself into the build-test-learn development cycle.
So, users can submit not only predictions on the test set but their inference code as well. During this first month, inference code was submitted through email, which is, let’s face it, not the best user experience. One of our top priorities was to add a UI element where users with the №1 submission can upload their packaged inference code.
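To make "packaged inference code" concrete, here is a minimal sketch of what a self-contained inference entry point could look like. The class name, method signature, and the `crp` feature are purely illustrative assumptions, not telesto.ai's actual packaging format, which is defined by the competition rules.

```python
# Hypothetical sketch of a packaged inference entry point.
# All names and the single made-up feature ("crp") are assumptions
# for illustration; the real submission format may differ.

class Model:
    """Loads a trained model and produces predictions for new samples."""

    def __init__(self):
        # A real package would load trained weights bundled with the
        # submission here; this stub just fixes a decision threshold.
        self.threshold = 0.5

    def predict(self, samples):
        """Return a 0/1 COVID-19 prediction for each blood-test sample.

        `samples` is a list of dicts of blood-test measurements; this stub
        thresholds one feature purely to keep the example runnable.
        """
        return [int(s.get("crp", 0.0) > self.threshold) for s in samples]

if __name__ == "__main__":
    model = Model()
    print(model.predict([{"crp": 0.8}, {"crp": 0.1}]))  # → [1, 0]
```

The key design point is that the submission is self-contained: the platform only needs to instantiate the model and call a single prediction method.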
Once uploaded, the code goes through our quality assurance pipeline, which checks, for example, that the model is not overfitting on the test set. In future competitions, a submitted model passing our tests will trigger the reward payment for the weekly winner.
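One way such an overfitting check can work, sketched below under our own assumptions (the function names and the 0.05 tolerance are illustrative, not the actual pipeline): compare the submission's public test-set score against its score on a hidden holdout split, and flag it when the gap is suspiciously large.

```python
# Hypothetical sketch of one QA-pipeline check: a model whose public
# test-set score is far above its score on a hidden holdout split may
# be overfitting the test set. Names and thresholds are illustrative.

def overfitting_gap(public_score: float, holdout_score: float) -> float:
    """Difference between the public test score and a hidden holdout score."""
    return public_score - holdout_score

def passes_overfitting_check(public_score: float,
                             holdout_score: float,
                             max_gap: float = 0.05) -> bool:
    """Accept the submission only if the score gap stays within tolerance."""
    return overfitting_gap(public_score, holdout_score) <= max_gap

print(passes_overfitting_check(0.91, 0.89))  # small gap → True
print(passes_overfitting_check(0.95, 0.80))  # large gap → False
```

A real pipeline would of course combine several such checks, but the gap test captures the core idea: performance must generalize beyond the public test set.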
To find out more about this and other future plans, read on!
For the next few weeks, we have two things in our crosshairs: a payment solution and user profiles.
Payment solution
Our vision is to give monetary rewards to our competitors every week. To do this, we need to implement a solution for transferring money regularly. This poses significant technical, legal, and tax challenges for us. However, it is our number one priority: without it, our platform is not the competitive crowdsourcing marketplace we have imagined it to be.
Do you have a preferred solution? Would you prefer Stripe or Hyperwallet? Shoot us a message at telesto.ai/contact!
User profiles and badges
We are proud of everyone who participates in a machine learning competition, and perhaps even creates a winning solution. If you belong to this group, you should be proud of yourself as well. We want you to be able to showcase your achievements on your user profile.
Top-performing model for ten weeks in a row? Highlight that in your profile.
Leading competitor in the object detection category? Highlight that in your profile.
What would you like to display? What would you rather not? Tell us and we’ll do our best to make it happen!
Have a suggestion? Let us know!
The most important part of telesto.ai is You. This platform is being built for You, to democratize machine learning and enable everyone to participate in real-life projects, all while making money.
If you would like to take part in this journey, we would love to hear from you! Whether you have a feature request, a bug report, or a question, feel free to drop us a message!