These simple-to-follow lessons will help get you going quickly.
Welcome to Flexible Vision.
We are a new type of machine vision company focused on changing the way machine vision is deployed on factory floors.
Finding subtle defects and managing archived images is now simple, thanks to our intuitive software and hardware solution.
Flexible Vision is the first machine vision application to merge the power of the cloud and the reliability and security of the edge.
Do on-the-go inspection and image archiving in the app, or use the edge system to integrate with existing automation and add capability to your factory lines.
Flexible Vision opens the door to new and exciting defect detection on organic and dynamic products.
The Flexible Vision application is intuitive, and creating an inspection model is simple.
Quality engineers now have the power to deliver solutions to the factory floor.
Contact us to see how Flexible Vision can solve your application.
Welcome to Lesson 1.
Creating a project.
Building an AI vision model is broken out into 4 steps.
In this lesson we will focus on creating the project and adding the images.
If you don’t already have an account, head back over to the Flexible Vision home page to sign up.
Click the login button in the upper right of the page.
Then click Sign Up and fill in your login and company information.
Click the launch button and you will be forwarded to your Flexible Vision app dashboard.
Click the plus icon to add a new project.
Type the name you would like to associate with this project and click Add.
You will now see your new project card appear.
The grey circle at the bottom right of the project card indicates completion progress.
Let's click on it to move to our first step: adding the images.
There are a few ways to add images to your project.
You can use a USB webcam, upload from a file manager, or upload directly from a Flexible Vision on-prem processor.
In this lesson we will focus on the first two methods.
Select the project and camera from the dropdown menus.
Once you have your product within the camera's view, click the snap button to capture the image.
Repeat this process a few times to get a good variety of images.
Usually 5 to 10 image samples is a good starting point.
Now let's add a few images from our file directory.
Simply click the upload button to bring up the drag and drop upload box.
Our images are now added to our project.
Remember, you can always come back and add more images to your training set.
To advance to tagging the images, click the orange Tag Images button.
In summary, we have created an account and a project, added images from our local webcam, and added images via image upload.
This concludes lesson 1.
Please join me in lesson 2 to start tagging these images.
Welcome to Lesson 2.
Tagging your dataset.
In this lesson we will focus on step two, tagging our image dataset.
To create your tags of items you are interested in detecting, click the Add Tag button in the lower right quadrant of the screen.
You can add as many tag names as you wish.
In this case, we will add just two.
To start tagging, simply click and drag the bounding box over the feature or item of interest.
It is important to tag all features or items visible within the image.
Having missing or incorrectly tagged images can degrade the quality of your detections.
To toggle between tag types, either press the corresponding number key or use the tag selector at the top of the image.
Once all your images are tagged, it's a good idea to double-check your work.
There are a few shortcuts built into the tagging app to make your work a bit easier.
These shortcuts include saving, adding a tag, duplicating a tag, zooming and more.
Right clicking anywhere on the image will bring up a list of tools along with a shortcut menu.
In summary, we have created multiple tag types, tagged our images, and reviewed a few simple shortcuts.
Please join me in Lesson 3 to kick off the model training.
Welcome to Lesson 3.
In this lesson we will focus on steps 3 and 4, creating the model and testing it in the cloud.
To start the training process, click the orange button labeled Run Training.
This will bring up a list of available options you can cater to your specific application.
The first option is choosing high accuracy versus high speed.
High accuracy is good for most applications.
Optionally, high speed is available for applications where objects and defects are larger and more obvious.
The lower settings on this menu are used for augmenting your data set to make it more robust and less susceptible to lighting conditions and camera pose.
Toggle these settings as needed.
The resolution dropdown sets the resolution of the image used for training and running the model.
For example, setting the resolution to 768 pixels will resize your dataset and make your model run faster, but with less resolution.
So if your defects are small, it's best to leave the resolution at the native value of your image sensor.
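To see why, here is a quick back-of-the-envelope calculation; the sensor width, field of view, and defect size below are assumed numbers for illustration only.

```python
# Hypothetical numbers for illustration only.
native_width_px = 3072      # assumed native sensor width in pixels
field_of_view_mm = 300.0    # assumed horizontal field of view in millimeters
defect_size_mm = 1.0        # assumed size of the smallest defect

px_per_mm_native = native_width_px / field_of_view_mm
defect_px_native = defect_size_mm * px_per_mm_native    # roughly 10 pixels

training_width_px = 768
scale = training_width_px / native_width_px
defect_px_resized = defect_px_native * scale             # roughly 2.5 pixels

print(f"Defect spans {defect_px_native:.1f} px at native resolution")
print(f"but only {defect_px_resized:.1f} px after resizing to 768 px wide")
```

A defect that shrinks to only a couple of pixels is much harder for the model to learn, which is why small defects call for keeping the native resolution.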
Click create your model to save these settings and start the training.
Ensure you have tokens in your account.
Tokens can be purchased through our webstore or through your organization administrator.
Head back over to your dashboard to see the progress of your model.
A typical training takes about 20 to 30 minutes to complete.
Once complete, the indicator will go away, your completion circle will be full, and we are ready to test the model's performance in the cloud.
To test your model, click the Snap and Find menu item in the left navigation.
From the dropdowns, select your project and camera.
Move your item within the camera's view and click Snap.
Within a few seconds you will see your model's results on the film strip at the bottom of the page.
Click on the image to enlarge it.
Review your image and ensure it meets your performance requirements.
Optionally, you can use the drag-and-drop function if a camera is not applicable to your application.
It's always a good idea to run this test on a variety of parts and poses.
This completes this lesson.
In summary, we have augmented our dataset, created our AI model, and tested our model's results in the cloud.
Please join me in the next lesson where we will set up our on-premises processor.
Welcome to Lesson 4.
In this lesson we will focus on setting up your hardware and linking your processor to your account.
In this first section we will focus on connecting your camera, processor, and monitor.
Before we start, make sure to unbox the hardware and securely mount the equipment.
Start by connecting the touchscreen monitor's USB and DisplayPort cables to the processor unit.
The camera can be connected to the dedicated PoE port or to USB, depending on your camera model.
The processor is also equipped with two non-PoE ports: LAN 1 for connecting to the factory LAN and LAN 2 for machine-to-machine communication.
Finally, connect the three 24-volt power supplies as shown.
Once these connections are in place, connect the system to the power outlet and it will begin its power-up.
In this second section we will focus on getting your processor connected to your Wi-Fi network, registering your device, and lastly, syncing your models.
Start by connecting your laptop or smart device to the on-premises processor's hotspot.
Each processor has a unique name that starts with Visioncell.
The hotspot password is "password".
Once connected, navigate over to your web browser, type 192.168.12.1, and press Enter.
Click on any of the menu tabs on the left to bring up the Wi-Fi connection prompt.
Select the Wi-Fi network you would like this device to be on and enter the corresponding password.
Click the Update Network Settings button and wait for the assigned IP to be displayed.
The processor is now connected and accessible on our factory Wi-Fi network.
We can now move our laptop back over to our factory Wi-Fi.
Now let's click on the newly assigned IP address to register our device.
Click Login and, if prompted, enable popups and refresh the page.
Click the Confirm button and then sign in using the Flexible Vision account that you created in Lesson 1.
You will now be redirected back to your on-premises dashboard.
Again, click on any of the left menu items to bring up the device name prompt.
This is the name that will show up in the metadata of your predictions.
The status of this device will also be available in your cloud portal.
Let's now select the models we would like to pull down from the cloud and run on this device.
Click Sync and wait for the models to be downloaded and deployed.
In this lesson, we have connected our system components, registered the device, and synced our models from the cloud.
Join us in the next lesson where we will run these models.
Welcome to Lesson 5.
In this lesson we will focus on running your downloaded model, creating a mask, and running a model preset.
The preset feature built into the application allows you to run a complex program with a simple input from a remote device.
In this lesson we will cover running your model with and without a preset, as well as configuring some of the options of a preset.
Once your models have been fully downloaded, head over to the Snap and Find menu tab.
Go ahead and select the camera you would like to use, along with the model name and the version of your model.
Within a moment, you will see the live feed of your camera finding your objects or defects.
This is a great place to start to verify the model is performing as expected.
It's a good idea to move the part around or present various samples at this point.
Now let's create a new mask to block out areas of the image.
Click on the Masking menu item on the left.
Again, let's select the camera we would like to use to draw our mask.
There are several tools available for drawing the mask.
Use the ones that fit your application best.
When using the polygon tool, you will notice a green dot, which is your start and finish point.
To save your mask, click the save icon and give it a unique name.
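If you are curious what a mask is doing under the hood, the idea is the same as a standard polygon mask in image processing. Below is a minimal sketch using OpenCV with a made-up polygon and file paths; Flexible Vision's own implementation may differ.

```python
import cv2
import numpy as np

# Load a frame from the camera or disk (the path is a placeholder).
image = cv2.imread("frame.png")

# Hypothetical polygon drawn around the area we want to KEEP,
# defined by its corner points in pixel coordinates.
polygon = np.array([[100, 100], [500, 120], [480, 400], [120, 380]], dtype=np.int32)

# Build a black mask and fill the polygon with white.
mask = np.zeros(image.shape[:2], dtype=np.uint8)
cv2.fillPoly(mask, [polygon], 255)

# Everything outside the polygon is blacked out before inspection.
masked = cv2.bitwise_and(image, image, mask=mask)
cv2.imwrite("masked_frame.png", masked)
```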
Now let's move on to creating a preset.
Click I/O Presets on the left menu.
Start by selecting the input trigger for this preset, then continue by selecting the model, model version, camera, and mask.
You can also set the minimum confidence score you would like to display on screen, as well as toggle whether you would like to archive the inspections to the cloud.
Once fully configured, click Save.
Now let's head back over to the Snap and Find page to run our preset with a single click.
These presets can also be triggered by a PLC, robot, or a digital input directly to the processor.
This concludes the Lesson 5 training.
In this lesson, we tested our model with the Snap and Find feature, created a mask, and ran a preset.
Join us in the next lesson where we will run through the camera setup settings.
Welcome to Lesson 6.
In this lesson we will focus on setting up your camera image and calibrating the camera to real-world coordinates.
In this first section we will focus on finding your camera and setting up the image.
Let's head over to the Camera Details tab on the left.
Here we will see a list of connected cameras.
If you don't see your newly connected camera listed, click the Refresh Cameras button.
Within a few seconds you will see your newly connected camera appear in the list.
Now we can review the Camera Settings tab.
From the camera dropdown list, let's select the camera we would like to adjust.
On the right side of the page you will see a few common settings, including changing the camera name, the camera exposure time, and the sensor gain.
Adjusting the sensor exposure time will make your image brighter or darker.
The gain setting is useful for amplifying the image, but it can also introduce more image noise.
Minimal gain is recommended for most applications.
To increase processing speed and remove unwanted areas from an image, we can use the crop region-of-interest tool.
Simply left-click and drag a box over the area of interest.
Click the blue Set Region button, and within a moment the camera feed will only render the selected area.
To undo this, click the Clear Region button.
Within a few moments the camera feed will revert back to the original full view.
Under the Advanced tab, you can access a plethora of camera settings, including auto exposure, color channel tuning, and much more.
In this next section we will cover removing lens distortion and calibrating your camera to real-world coordinates.
Click on the Calibration menu tab.
Select your camera from the dropdown and place your checkerboard calibration grid under the camera’s field of view.
Focus your camera so the corners between the squares are sharp.
Enter the width of the squares; in this case our squares are 20 millimeters.
To calibrate the camera pixels to millimeters and remove lens distortion, we will take a series of 5 images.
Order does not matter, but we will want to move the grid to all four corners and take one image in the middle.
Remember to keep the entire grid within the view of the camera.
After taking the 5th image, you will notice the image appears much more flat and true.
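For reference, the underlying technique is standard checkerboard camera calibration. The sketch below shows the general idea with OpenCV, assuming a 9 by 6 inner-corner grid, 20 mm squares, and placeholder file names; the actual grid dimensions and Flexible Vision's internal implementation may differ.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)        # assumed inner corners per row and column
square_mm = 20.0        # square width, as entered in the calibration page

# Ideal 3D corner positions on the flat grid, in millimeters.
obj_grid = np.zeros((pattern[0] * pattern[1], 3), np.float32)
obj_grid[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm

obj_points, img_points = [], []
for path in glob.glob("calibration_*.png"):      # the captured grid images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(obj_grid)
        img_points.append(corners)

# Solve for the camera matrix and lens distortion coefficients.
ret, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Undistorted images appear "flat and true", and pixel spacing maps to millimeters.
undistorted = cv2.undistort(gray, camera_matrix, dist_coeffs)
cv2.imwrite("undistorted.png", undistorted)
```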
The calibration has also advanced to the second step of setting our X, Y, and rotation reference.
This second step is mostly used when sending coordinates to a robot for pick and place guidance.
Let's now move the large QR code under the camera.
Right away you will see the camera track the QR code.
After clicking the Snap button, make sure to keep the QR code in the same position while teaching the robot the origin X and Y positions.
The very center crosshair is the origin position, and the X and Y directions follow the right-hand rule typical of robot frames.
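To make that hand-off concrete, here is a small sketch of the math involved, using assumed numbers. This is generic 2D frame math for illustration, not Flexible Vision's output format or any particular robot's API.

```python
import math

# Taught origin of the QR-code frame, expressed in the robot's base frame
# (hypothetical values recorded while teaching the robot).
origin_x_mm, origin_y_mm, origin_theta_deg = 350.0, -120.0, 90.0

# A detection reported by the vision system, relative to that origin.
part_x_mm, part_y_mm, part_theta_deg = 42.5, 17.0, 12.0

# Rotate the detection into the robot frame, then translate by the origin.
t = math.radians(origin_theta_deg)
robot_x = origin_x_mm + part_x_mm * math.cos(t) - part_y_mm * math.sin(t)
robot_y = origin_y_mm + part_x_mm * math.sin(t) + part_y_mm * math.cos(t)
robot_theta = origin_theta_deg + part_theta_deg

print(f"Pick at X={robot_x:.1f} mm, Y={robot_y:.1f} mm, R={robot_theta:.1f} deg")
```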
In this lesson, we have walked through viewing the available image setup tools and calibrated our camera to be used for robot guidance.
Join us in the next lesson where we will explore creating a flow program.
Welcome to Lesson 7.
Creating your first Flow.
In this lesson we will focus on understanding the workspace, creating a flow to run a preset, the basics of JSON formatted objects, and adding controls to the dashboard.
To start creating our first flow, let's head over to the Node Creator tab in the left menu of our on-premises processor.
The editor window consists of four components:
The header at the top, containing the deploy button and main menu.
The palette on the left, containing the available nodes to use.
The main workspace in the middle, where flows are created, and the sidebar on the right.
Start by clicking and dragging the blue inject node from the palette onto our workspace.
Let's also do this with the green debug node.
Now click and drag a line between the two nodes.
This simple flow will send a timestamp number through the wire out to the debug node.
Let's open the console on the right menu.
You will notice the deploy button is now active.
In order for the flow to run in real time, we will need to click Deploy.
Once deployed, we can click the blue inject node and see the results appear in the debug window.
Let's now modify this flow to run a preset on our processor.
If you scroll down the palette, you will see a set of Flexible Vision nodes.
Click and drag the preset node onto the wire.
Double-click the newly added node and let's configure it.
This configuration only needs to be done once.
Future uses of this node will reuse these configuration settings.
The workstation name is a unique name that is added to the image metadata.
This name is useful when filtering data in the cloud to know which station the results came from.
The username is admin, the password is fvonprem, and the IP address is 172.17.0.1.
Click Add, then Deploy.
We can now open the node to select the preset we would like to run.
In this case we will be using preset number 2.
Let's click Deploy one more time and test out the flow.
You can see the results appear in the console window.
Let's review these results.
The data is displayed in JSON format, a text-based format for representing structured data.
This format allows data to be nested in a tree structure and lets users pull out as much or as little of the data as they need.
The results consist of a variety of information, including camera settings used, image size, model name and version, processing times, quantity of items found, pass/fail details, locations of each item, the image in base64 format, and much more.
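As an illustration only, here is what navigating such a nested result could look like; the field names and values below are hypothetical, not the exact schema returned by the system.

```python
import json

# A heavily trimmed, hypothetical example of a prediction result;
# the real payload contains many more fields.
raw = """
{
  "model": {"name": "connector_inspection", "version": "1685000000"},
  "image": {"width": 2448, "height": 2048},
  "results": {
    "count": 1,
    "detections": [
      {"name": "connector", "score": 0.97, "box": [512, 300, 780, 460]}
    ]
  }
}
"""

data = json.loads(raw)

# Walk the tree and pull out only the piece we care about,
# just like copying a variable path into a debug node.
first_item = data["results"]["detections"][0]["name"]
print(first_item)   # -> connector
```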
In this demonstration we will want to pull out the name of the item found.
To do this we will click on the small arrow next to the item of interest.
This will copy the path of the variable.
To output just the variable of interest, I will paste the path into a new debug node.
Click Deploy and run the flow.
Now we can see two results.
The first is the original message, and the second is the new message with just the word "connector".
Clicking the flag on the side of the debug node will silence its messages.
Now I will only see the debug message of interest.
Now that we have a few basics under our belt, let's start customizing our dashboard.
To start, I want to move our customizable area up to the very top of my dashboard so it is easily viewable to the operator.
To do this, just drag and drop this item onto the predefined boxes on the page.
Now let's head back over to Node Creator to add some tools to this area.
Scroll down to the bottom of the palette and you will see a large list of dashboard nodes.
Let's add a button and a text box.
I'm also going to add a new node type called a "change node".
This node will allow me to pull out just the variable name, instead of the entire object.
I will overwrite the payload with just the name of the item found.
After stringing these nodes together, I will need to customize the button and text box information.
Double-click the node and give your button a unique name.
In this menu, you can change the size, text color, box color, location on the page, and more.
I will keep the default settings for this demo. Let's do the same for the text box as well.
With this modified flow, an operator can click the button on the dashboard and see the results of the item the camera found.
Let's clean up some unneeded nodes and test it out.
We can now see our dashboard showing our new button and text box.
After clicking our new button, we can see the camera took an image and is displaying its results within the newly added text box.
This concludes our Node Creator training module.
Thanks for following along.
Please join me in the next lesson where we will explore and create post inspection programs.
Welcome to Lesson 8.
Creating your first Program.
Programs are an easy way to run various inspections alongside the item detection.
Using this feature is beneficial when you need to send positional information to a robot, read a barcode, count the quantity of an item, or determine the surface area of a defect.
In this lesson we will focus on understanding the program structure, adding post-process inspections to a newly created program, and syncing and running our program on a processor.
Let's log in to our cloud portal and head over to the Programs tab on the left menu.
Here we will see a list of all our programs.
To create a new program, click on the plus icon.
Let's give it a unique name so it's easy to reference.
Next we will need to select the project and model version we want to add this program to.
Click the Add Inspection Tool button.
Our environment allows you to go two levels deep.
For example, if you were trying to read the date on a coin, you would first find the coin, then find the date on the coin.
For our demo we will look within the entire field of view of the camera and find a connector.
Let's now add some inspection tools.
The Quantity tool will count the number of connector detections above the specified score.
The Orientation tool allows you to upload a reference image of your item; during runtime, the system will output the X, Y, and rotation of the item.
This is typically used for robot guidance.
Let's upload a calibrated image from our processor and crop out a single instance of the item we are expecting to find.
The Isolate tool is extremely good at removing noisy backgrounds and will highlight just the item of interest.
This tool will also run during runtime if enabled here.
We now need to specify our origin point.
This is a relative point that will be sent to the robot during runtime.
The Area tool will run automatically and does not need any special configuration.
We can upload an image just to test that the tool is detecting our item.
The Pass/Fail tool will highlight your images as red or green during runtime, and you can also enable logging of only the failed images to the cloud.
In this demo we will pass the result if the connector quantity is equal to 1.
If the system detects exactly one, it will be a pass; anything else will be a fail.
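The rule itself is simple; expressed as code (a hypothetical sketch with made-up names, not the tool's implementation), it is just:

```python
def pass_fail(connector_count: int) -> str:
    # Pass only when exactly one connector is detected; any other
    # quantity (zero, or more than one) is a fail.
    return "PASS" if connector_count == 1 else "FAIL"

print(pass_fail(1))  # PASS
print(pass_fail(0))  # FAIL
print(pass_fail(3))  # FAIL
```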
Let's click Save and sync our models and programs to our processor.
On our processor, let's go to the Presets tab.
Go through and quickly add a new preset for this application.
Since this is a new program, we will need to sync it to this device; click on the Pull Programs button.
You can also sync your models through the Settings tab and skip this step.
Select the name of the program we created.
There are additional image archiving preferences available here as well.
Let's click Save, and now let's try it out.
From the Snap and Find window, select the new preset we configured.
The system is detecting the items as expected. Click the Snap button and verify the program is outputting the expected results.
Zooming in, we can see the red orientation tool is detecting the X, Y, and rotation of the item, along with outputting a fail result.
The fail result is because the quantity of connectors is not exactly one, as we configured earlier.
Let's remove a few and confirm we get a pass result with just one in the field of view.
The results look good.
Let's head back over to the dashboard to review the results we expect to see during runtime.
Here we can see the result.
You can optionally hide the pass results by clicking the slider in the upper right of the widget section.
Within the results you can see we are processing quite a bit of information, including the area of the item and the X, Y, and rotation in millimeters.
All of this information is available in the Node Creator flow and can be used in your custom application.
This information is also archived to the cloud for future reference.
Thanks for following along.
Please join me in the next lesson where we will set up our camera with high-speed strobing.
Congratulations on your new enterprise organization.
In this session we will cover how to log in and navigate your admin console.
Let's start by heading over to the Flexible Vision login page.
Click on the organization login button and type the organization name provided by your Flexible Vision representative.
Log in with the single sign-on listed, or with a username and password if available.
If you have been assigned administrative rights to this organization, you will see an Admin Console menu item on the left.
Let's open up this menu item to go through some key features.
To invite new members to our organization, click on the Invite Members button.
This will bring up a dialog box to enter the user's name, email, and sign-on method.
Click the Send button to send the invite request.
Let me add just one more user for this demonstration.
Have the invited user keep an eye out for an email that looks something like this and click to accept the invite.
Once the user has accepted the invite, refresh this page and you will see them under Members.
By clicking on the area to the right of the user name, you will bring up an edit box that allows you to assign tokens and storage; these will be decremented from the organization and added to the specified user.
Limiting storage values may prevent data from syncing to the cloud if the devices are linked to the user being edited.
Limiting tokens will prevent AI trainings from being created, along with usage of the cloud Snap and Find feature.
Lower down on the page you can edit your company name, logo, and color theme, and enable or disable project sharing.
To change your company logo, find your logo online, then copy and paste the image address.
To change your theme colors, click on the colored buttons and select your color choices.
Don't forget to click the Update button to save your theme.
If you have any questions about any of these features, please reach out to our team for more information.
Thanks for watching.
In this quick tip we will sync your vision models from the cloud to your local processor.
Let's start by navigating over to the settings page of your local processor.
Click on the Sync dropdown, then click Sync Models.
Within a moment, a list of your available models will be displayed.
Select the checkboxes of all the models you would like to pull down to this device, then click Sync.
Note that the latest models will always be at the top of the list.
The version number is a timestamp: the greater the timestamp, the newer the model.
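For example, if the version numbers are epoch-style timestamps (an assumption about the exact format), picking the newest model is just a matter of taking the largest value:

```python
from datetime import datetime, timezone

# Hypothetical version numbers as they might appear in the sync list.
versions = ["1685000000", "1699999999", "1692345678"]

newest = max(versions, key=int)
print("Newest model version:", newest)
# If the version really is a Unix timestamp, it also tells you the training date.
print("Created on:", datetime.fromtimestamp(int(newest), tz=timezone.utc).date())
```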
The models are now being synced.
you can check the status of the download by clicking on the bell in the top right of the screen.
download times vary depending on your internet speed and number of models being synced.
A typical download takes about a minute.
Once the bell is no longer visible, the models are ready to run.
If you are using the preset feature, make sure to update your presets to use your latest downloaded version by navigating to the Presets menu item and selecting the new version from the dropdown.
Then click Save.
If you have any questions about any of these features, please reach out to our team for more information.
Thanks for watching.
Calibration grids for your cameras. The grids below will allow you to map your camera to real-world coordinates for robot guidance.
Use these grids to calibrate real-world coordinates for your application.