Lobe Help

Everything you need to know to train great machine learning models with Lobe.

Export

Export your model to industry-standard formats to use in your app.

How do I use my model?

Your model is a collection of files that other programs can load to run predictions. These files store both the structure of your model and the weights that are a result of training.


You can directly upload your model to Power Platform to use in Power Apps or Power Automate. You can also use the model files locally in your own app or in most major cloud platforms to create an API. Lobe also hosts your model as a local API to help kickstart your app development.


Which export option should I choose?

Lobe provides a few workflows for using models: no-code apps on Microsoft’s Power Platform, calling a local API, adapting starter projects, or working with model files directly.

No-code apps with Microsoft’s Power Platform

If you want to quickly build an app or automation flow without writing code, export your model to AI Builder in the Power Platform for use in Power Apps or Power Automate flows. You can connect your model to other external services in your app or flow, including many Microsoft integrations, to create complex end-to-end apps for you or your organization.

Integrate with an external app

If you are using Lobe with another app, check if it can call an API to get predictions over a network or if it can load and run model files directly.


Using an API for predictions

If your desired app can make POST requests, process JSON, and base64 encode images, you can build a network request to get predictions from your model.


If the app is running locally on the same machine or network as Lobe, such as Origami Studio, you can use the Lobe Connect local API without additional configuration. Lobe must be running with your project open for the Lobe Connect local API to function.


If your desired app is running in a different environment than Lobe, or you wish to have predictions without the Lobe app open, see the 'Create an API for flexibility' section below for hosting your model as an external API.


Loading model files directly

If your desired app can load and run external models, select the appropriate Model File export option for your app.


If Lobe doesn’t support that export yet, such as TensorRT or OpenVINO, look for officially supported converters for the desired format. Most of the time a converter exists from ONNX or TensorFlow model files into the end format. If you run into difficulties with a particular model, please check out our subreddit community or use File > Report Issue for support.


See Where can I use my model? for more information on the model file exports.

Create an API for flexibility

Creating a REST API is one of the most versatile ways to hook up apps to use your model. You can call your model as a service from other products, deploy to a variety of cloud providers, or even run on your own computer or edge device like a Raspberry Pi.


To prototype or experiment with using an API, check out Lobe Connect. To set one up yourself to run in any location, check out REST Server for an example using Python and Flask with deployment instructions to Azure App Service.
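To see what such a REST server does under the hood, here is a minimal sketch of a prediction endpoint using only Python's standard library. The run_model stub and the request and response shapes are illustrative assumptions, not the actual code from the REST Server example:

```python
import base64
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_model(image_bytes: bytes) -> dict:
    # Stub standing in for real inference with your exported model files.
    return {"predictions": [{"label": "placeholder", "confidence": 1.0}]}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body, expected to look like {"image": "<base64 string>"}.
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length))
        image_bytes = base64.b64decode(body["image"])
        payload = json.dumps(run_model(image_bytes)).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep the console quiet

# To serve for real (the port here is an arbitrary choice):
#     HTTPServer(("0.0.0.0", 8080), PredictHandler).serve_forever()
```

A real server would replace run_model with code that loads and runs your exported model; the REST Server example shows a fuller version using Flask.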


To create an API using a Node server such as Express, use the TensorFlow.js export option and see the example code in the export package for loading and running the model on an image.

Use or customize a starter project

If you want to do predictions in the browser, check out the Web Sample starter project that uses TensorFlow.js without needing a backend server. We recommend using the Speed model from File > Project Settings for faster load times and prediction speed at the expense of potentially lower accuracy when running TensorFlow.js in the browser.


If you want to run your model in a mobile setting, see iOS App for a sample using CoreML for iPhone or iPad apps. Similarly, see Android App for a sample using TensorFlow Lite to run on an Android device. For mobile apps, we recommend using the Speed model from File > Project Settings for faster load times and prediction speed at the expense of potentially lower accuracy.

Classifying images on your computer

You can export your model as TensorFlow and use the Image Tools desktop app to run your model on a folder of images or a spreadsheet of image URLs.

Write an app from scratch

If you have more experience as a developer, you can export model files directly and use their underlying frameworks such as TensorFlow, CoreML, or ONNX.


Where applicable, we recommend using our Python and .NET libraries for working with Lobe exports: they provide useful helper functions for loading and processing data and formatting return values, and they can even work with the local API for quick prototyping.

See Where can I use my model? for more information on the model file export options.


Where can I use my model?

We are continually expanding the ways you can use your model. Current recommendations:


No-code apps with Microsoft’s Power Platform

Export your model to Power Platform for use in no-code app development with Power Apps and Power Automate, and to connect with many other Microsoft or external services. Use the Speed model from File > Project Settings if you need fast inference speed or if you want to use solutions with Application Lifecycle Management (ALM).


Local Python app or hosted on Azure, Google Cloud, or AWS

Export your model as TensorFlow. TensorFlow’s SavedModel is a standard format used in Python applications running TensorFlow 1.x or 2.x, and can be deployed in TensorFlow web services to run inference on the cloud as an API. See our Python SDK for an example running the TensorFlow export.


Apple iOS

Export your model as Core ML to develop iOS, iPadOS, and macOS apps. Use the Speed model from File > Project Settings if you need low latency and a smaller memory footprint on iOS.


Android or Raspberry Pi

Export your model as TensorFlow Lite to be used for mobile and IoT applications. Use the Speed model from File > Project Settings if you need low latency and a smaller memory footprint on the edge.


ONNX

Export your model as ONNX for cross-compatible applications, including edge devices and .NET applications.


Web Applications

Export your model as TensorFlow.js for browser-based JavaScript or server-side Node applications. Use the Speed model from File > Project Settings if you need low latency and a smaller memory footprint in the browser.


Lobe Connect

Lobe will host a local API to call your model via a REST endpoint. Use this option to mock a service that runs predictions while developing your app.

To run the local API:

  1. Capture an input image as a base64 string. Make sure the base64 string doesn't include the 'data:image/jpeg;base64,' prefix that is sometimes added.
  2. Send a POST request to the locally hosted API for your project with the details from the export sheet. Here is an example request:
POST http://localhost:38101/v1/predict/project-uuid-from-export-window
Content-Type: application/json
{
"image": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAUDBAQEAwUEBAQFBQUGBwwIBwcHBw8LCwkMEQ8SEhEPERETFhwXExQaFRERGCEYGh0dHx8fExciJCIeJBweHx7/2wBDAQUFBQcGBw4ICA4eFBEUHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh7/wAARCABkAGQDASIAAhEBAxEB/8QAHAAAAQQDAQAAAAAAAAAAAAAABwAEBQYCAwgB/8QAOhAAAQMDAgMGBAQFAwUAAAAAAQIDBAAFERIhBjFBBxMiUWFxFBWBkSOxwdEIMkJSoWJy4TNDY7Lx/8QAGAEAAwEBAAAAAAAAAAAAAAAAAQIDAAT/xAAgEQACAgICAwEBAAAAAAAAAAAAAQIRAxIhMQQTQVEi/9oADAMBAAIRAxEAPwDo80t+dYd4DzNehxNdxwmZP+oD61qcAI3OqvVLTg45+1DTj7tahcIXh233Dhu+qbbGfikNp7lQ/uSc7j7UyVithHBHIHH0rBStI57e9c7Xv+IxKLihu1WtlcPTqWuQ/g49AnrVbvv8Rd1dlpVbokVhpP8ASQVlR9TTqIuzOqw4k8sCvRn+8ZPLArnDhX+ILW+hF3hJ0LV4nGl/y7eRo3cK8TwOIbYiZb39aFjlkZB8iKZwYu36T6s7ArcJ8wMCtakqPMqI98V4FnGkoOPMqofXrtTs1k7SGuDbwwuEX2kLYmOKHdKKuST5eWeVBRbNsgglAGcJT9TWGAD0HsKx75KgC24hWfrWp5TmT4h9BTJUDazNS20qwVnPtSrRrWNsE0qajWS4x55+tIk8jjHnTcKUP6sV4VDO5J+tR1G2HCsAHJ51Uu0cQU2CVImxw8200VBWndPrmrC7LjMlQdkNoKEa1al8k+Z9Kr3aOwifwNdUIdSlKoilhw7gADOce1FRA2cO8evx3pkmTAjoYQ6oJBCcZTvlWBsPXH61E2CyT74+WLZGcXoaLqlFQGUgZJ3O532SNzRIuHAy71wxcJVvcDqGHD3DilFIWgjcf7k6Rt+9DlmVcLPLVDLJQpkgEK/mTj8qnP8AlloNOJLNWaSiCp+LpZW0B3gfawSfc+3KiV2IcaqtN3CZCUo2xJSkeFxP9xHmPTpQqh8RyvmEh6Q4t1pxIBaKcpUcAbjptVj4OgTbZbp9znRXWm1RXCnUnJ8QwnH3poZPgsopnVbvafwo0oMm4N59ATQ27dF8K8ZWVq6264w37lbnAEhvIWptWxTvjO+D/wDa5+kXheSjvQFY2IGBTZFwkKWO9d36LBrqhJIm4HT/AGDdoCyyjhe7PLMlCR8G4vcqQOaCfMfl7UZkuqWN3E/RNcN8NcRyLZdoVwwVuRHkuIUDvseXr1FdgcBcSxuJ7AxdoRHduDdB5oPUGjJpu0TrVll1f+U/alXgdV5ilShs1MXaC7/05bCznHgcSf1rVceILRb0vom3JlmQ0kqVHz4xvy9efShTAissz2HXYzMhttxKnEZwSAeQqJ7WI8yVdUS0yG5HxaNeI0XSphsHdK1DmTjkK5vLyPFHaJfxIwnJ+x8IdcVW/iq/3dV7sd7RcIE/aPFWrQ+2EjJb7vbUBzz1zuKtMDtJMVtFs4ksk6JLbaCXxoGDtjOD0PuaFdrNxau0MO216IHtTUVvC1KUFA504/kzk7c8VaOO75drzwfJjMONouEcBcd1SRrRp5pBPQgEYO1S8TLLJDacaFnT5j0TnGq7FKkM8P8AD6mIR7jvVIZQEpTkA68cuWPrQ5m9n8Nx9Ui+z2rk+gAJW02G1Kx/UvHNR+1USBx9dWLq8JTKw5IQjWyUEhCh/Ngc0g45DberXablcbhKdM4uJcWA4kpVtjYaQOe3rRyBarowidn9oF1Q+26tLKSCUKG59zUd2xXN34Vq2QVBqOgayobFZG3LOMD/ADmn/Ft/VZoWzwW4sEISo4BI6ZoMTZ8qbM
U/JWpfi1FOcge3kKGJhptjUj8cpddISASf2rWy+4lSTrOk1olrceecWo5Od6ytmgyCHDtoOnbO9Pu7L6cEpElqbOsBORzGNjXTf8KV57/h6dDWNIbdyATXK2CkDVsc9N6OXY63dOHbKJsV4pclZUpKkak6QcDYb866YOzkyKkdXIeb0jITmlQab4v4j7tP4bJ25qZWD+dKq6EbZEt3kMximSw7FfWD3SJEZSSo42GeW9NbTe7b8tYVcZzqX1BZMVDTisZUccvTHtUjHi215tcqTb47cmORhx9GFJSQCMjPPBxinpj3eRGDsWQ5DBPhUG0k+mxBP5VwrXLm2vrigLs3fNXY0UBq1XBxtBIJKCkDHM5yTUU+u2yWC6WJltJ6KZcW399O1PmjdI7yTO4jLiM4DbbLbalbcsnJ+1e3S9cOoQET2rowhQwXg65pJ/3JO3vtXVqn2Vt1QM+NmG9Kxb5tqbfXsqQt7BSPPAGrP0pcJ8KoQ+3cGr5NuCiNKnENaWtt8alc9+gq1P3bg1GPlEduU6F6h3cb4lZznqfEPvUpEuXEDzLaRw/DKAfw3HnA1gHroUCRS+qIrnJAx7WuGZKLcLql5CG0qAc7wbJJ5E45HbHlQhUpIUUuKIJ5EDINdUXCHxHOtlxZTEtUlRhuhEaM34nTpyD4h05/SucJsKVd7xGgJQszXlhrBb04JPLA8hUZY9XUS+HJa5NNq4bvUuMZUWG8tlfJSU52qXsvZ7xHMmBSYKG0pOVBagNvb9KOvDcO02NqLbkXBwOLU2yhothS1k4AAGnO9WK/WtFq40VFioIbCclOdgojce2c0XjoLyyYI+Eeyz5nKYXMlpSUkeBKc4HX/iiFdobVpmot8WNcfhY7YZQ3HSFeHGMklQP2FWGxRhCmJU4juicBRxkCmV24ehzbxJlOS4rrq8gMvOHSffoKtj/CLdkQ1OZithluFdkpTyCmST/7GlU0zYg20kF20pJGcfEJpVUFA9Q7OkylGJAlJhtq3W6/rWlQGcgrOTjln7edTUOY/NgrZiXuO0Sr8RZd7xbY8kpGMHfzpxbWvlzSWGWFPMFXiPfAjJG6jj2wAOQA8801nW6HcZJQIzDMdk5W4QpCyegzt+dcfhLIsd5FyTixJ4fiyJCHJV3my1J3SErDePtv/mnEW12S3IKvlsLTndTm+o+ZKjvVO4ra4ctUVtKLjNfkrQUj5c9ujrhSlJxjfoTUCxw3cryEyW7uz8OBrR8S/hxIO3iSk4TXS5q6HpsJ8672CI2VmS1CUc5VHe0A++DvVTuPFxjKX8DxG6nSnIL7TbwHlvscn71VZXA7q38P36Ihps4UpAJxk7HcgDPvUvbuCrNHlNKfS7N0nJDzmkHGNyE8h6HNI8j+DaBr7DZt2vPC1w4iubbC2yRGhuoQUh0/9xWPTZP1NRHGkCDGWX2mGIk5eQh5DKC6kddOSN/WrCrtH4btlnhW22xVx4bLRDTCIpCUY5J2GASf3oa8e8VR+IJzEhEFwIaaGQfCrUeYA8hRu3bFqiY4X4jt/DEaRPEWTebwtX4c2SpomMMYCUYOlOcqyeew6Vjw/wATOcRcWRE3ox4zak5cUiUFKfV/aNuRJ/xQ9k223SNKFRnNAAOFEpAPUAJPnTNvhuINLkWVIjaRyJ1JIJ9d+lFlEdL3C8cOuWCTIjzYrhWlfdFCwSVAE4A6HY/ah0u9d0ku/FcwPC6kEf4waDtwg3qyvpchvsuNuDwJZAQtZG5yOZ68q0urmXNxtyWuW46lPiQ0nSE+efpsfapPKo8masI73aJEDq0Jjpe0HSVpUQCaVVGOHIjQY7iQ8Rue6cUAnO4ScHcgY3pVve/wXVEu87xJeoKI1js5QypISZCiEtOA/wBpJGw8jmnDfAF4mY+ZXNTLaU7ojeL0wMnHL3ooR5EZRSo6HUZ0hQIII5cuu/8AzTpZSHO5UtpSdOsaVatW/LHUe/lTQjJL+nYzgkqQND2fWAKaYU9cFJWCe7
J0oOOYOBsPrUvFtcCAhtMWIy4lHhDSUpBABydztV1dUW5AQ7pUCkkkE7Y8gNj/AIqKkzggKSph/ZJJWNODjz32/wCaZxthSBtxda2lS8woV4El9wrUmS2hbOrIOwRgAdSeew2qvXO1cS2912W5FkPaUlRcjOHQT54Sc4+lFqSWJ8RtLuju1YISvnz9DzrW3BR3WuNKfbClYwhYKfpn86GtBBva+KG2oiBLS4p3rqOBny332863/OytgvQ4sbUohWXcqJ+gxVzvcWAUATG0TcjOHEJVg9cjHoetVVuDwx8chXcCMtw6UBt1SEqJ9DlOfp9aNCtDBy6T1OJWG2WcDk2FAgnnvz+lNpF6uExQiW+I65II/E8BGnbptz61Z0RrE3GCklcpzkvW+QUjP+kY251OQINvNvcQy2WwvPeBKT4hjr18/aklcuEZKgbWq2OOTzKuPxQfbX4S4rxIII6ciSM8iKIMG1MSAh6WhbneeFKFnBUn1G/Qcv2qPvdnXJBYg3V2Glspd1uN69J2ARzyPfr+cNNsPGLClSWLml8pTqUiO6oKWOg3wPPmehoY8evbMwlNxmgy2huQiGlCdPdp0ge/I596VCdjiriiEj4eVbFhxJO3cKVt03HOlXRURKDK8lDMdh5LaCvA3I/b3qRLeyXdairUkHIBzkZ329KVKsy0hrOcUlS285CUZBP0H61T76rDuQD4FpyNSsKOAcnf1pUqEQIbxu+L7mqW+QlWlKcgAb4GwHSrIFFNpdeSAFDCNvLUR+gpUq0ugfSl8XuOswlhDqx368OHVuev6VRpDypc2DFdA7tKg3tkEgasZ9qVKlZmEy3Ro6W9IZThISg7k6h67+g5VNtRGkuLDZU3pQT4cDPp7UqVUiuBDbDjtFtKSjZwb+nP9q8fjNpTsVAuFJUc+4pUqwpi60nIIzkjJpUqVMY//9k="
}
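A minimal Python client for the request above might look like the following sketch. The URL uses the placeholder project UUID from the export window, and the shape of the JSON response depends on your model's outputs:

```python
import base64
import json
import urllib.request

# Copy the real URL (port and project UUID) from Lobe's export sheet.
LOBE_CONNECT_URL = "http://localhost:38101/v1/predict/project-uuid-from-export-window"

def build_payload(image_path: str) -> bytes:
    """Encode an image file into the JSON body Lobe Connect expects.

    Note the base64 string is raw -- no 'data:image/jpeg;base64,' prefix.
    """
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return json.dumps({"image": encoded}).encode("utf-8")

def predict(image_path: str) -> dict:
    # Lobe must be running with your project open for this call to succeed.
    request = urllib.request.Request(
        LOBE_CONNECT_URL,
        data=build_payload(image_path),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```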

Spreadsheets and local images

Export your model as a TensorFlow SavedModel to use with our helper code for running predictions on spreadsheets of image URLs or local image files.
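As a rough sketch of that workflow, the following Python example walks a CSV of image URLs and runs each image through a prediction function. The column name and the run_model stub are illustrative assumptions; our helper code handles the real model loading:

```python
import csv
import urllib.request

def run_model(image_bytes: bytes) -> str:
    # Stub standing in for inference with your exported TensorFlow SavedModel.
    return "placeholder"

def predict_from_spreadsheet(csv_path: str, url_column: str = "image_url") -> list:
    """Fetch each image URL listed in a CSV column and run a prediction on it.

    Returns a list of (url, prediction) pairs.
    """
    results = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            with urllib.request.urlopen(row[url_column]) as resp:
                image_bytes = resp.read()
            results.append((row[url_column], run_model(image_bytes)))
    return results
```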


What is Power Platform?

Power Platform is Microsoft’s no-code/low-code platform for building applications and automation flows for your business. Use your model trained in Lobe with Power Apps and Power Automate to add intelligence to your business solutions.


Learn more about Power Platform here.


How do I export my model to Power Platform?

You can easily upload your model to AI Builder to use in the Power Platform. Select Power Platform on the Export tab and sign in to your work or school Microsoft account. Upload your model and view it in AI Builder within your Power Platform account.


Learn more about Power Platform here.


How do I use my model in Power Platform?

Once you’ve uploaded your model to Power Platform, you can view your model in your environment. Then you can use your model in Power Apps and Power Automate.


Learn more about Power Platform here.


Can I see example code?

When you export a model, the exported folder contains example code to use your model:

  • example/ directory: Lobe provides an example, with a readme, showing how to load and run your model in the chosen export format.
  • signature.json file: the signature contains information about your model that your app will generally have to provide when loading the model. This includes details about the exposed inputs and outputs, their data types and shapes, and the labels corresponding to predicted confidence outputs. See the example/ directory for code on how to load and use the signature.
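For example, a small Python helper for reading labels out of signature.json might look like the following sketch. The key names used here ('classes' and 'Label') are illustrative assumptions, so inspect your own export's signature.json to confirm its layout:

```python
import json

def load_signature(path: str) -> dict:
    """Load a Lobe signature.json file into a dictionary."""
    with open(path) as f:
        return json.load(f)

def labels_from_signature(signature: dict) -> list:
    # Hypothetical layout: class labels stored under 'classes' -> 'Label'.
    # Inspect your own signature.json to confirm the key names.
    return signature.get("classes", {}).get("Label", [])
```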


Can I see some starter projects?

Lobe provides example projects on GitHub to help you get started.


iOS

A sample iOS application running the exported Core ML model: https://github.com/lobe/iOS-bootstrap


Android

A sample Android application running the exported TensorFlow Lite model: https://github.com/lobe/android-bootstrap


Web

An example React web application that runs predictions on images from your webcam in the browser: https://github.com/lobe/web-bootstrap


An example Flask server that runs your model as an API service: https://github.com/lobe/flask-server


.NET

A .NET library and example command line app to run predictions on images from your computer: https://github.com/lobe/lobe.net


Raspberry Pi

An example Python application that runs predictions on a Raspberry Pi: https://github.com/microsoft/trashclassifier


What is Lobe Connect?

Lobe Connect is a local API to call your model via a REST endpoint. We recommend using Lobe Connect to prototype development with a REST service, connect to apps and tools locally, or run your model on devices connected to your same local network.

You need to have Lobe running with your project open for Lobe Connect to enable your local API.


Does Lobe have an SDK?

Yes! Lobe provides a Python SDK and .NET SDK to run the exported models.

