
Wildlife Insights welcomes any effort that advances wildlife conservation. Wildlife Insights is the largest and most diverse collection of camera trap images open to the public, and it provides unique tools that are not available anywhere else, including artificial intelligence models for species identification, automated statistics and visualizations. Wildlife Insights is the only system that provides all of these features in one place, making it easy for decision-makers to access the information they need to protect wildlife.

Most of the Wildlife Insights partners have been using camera traps to collect information on wildlife populations for years. Wildlife Insights grew out of a joint recognition that the data being collected individually could provide much more value to the conservation community if brought together, standardized and made openly available. The data in Wildlife Insights at the time of the first release is contributed by Wildlife Conservation Society, Smithsonian Conservation Biology Institute, North Carolina Museum of Natural Sciences, WWF, Conservation International and the Tropical Ecology Assessment and Monitoring (TEAM) Network. As users upload data into Wildlife Insights, the number of contributors will grow, the database will grow in size and representativeness, and the species identification AI will continue to improve.

Anyone can browse, discover, and download public data on the platform. After the first release, whitelisted users can use all of the features in Wildlife Insights to create projects and initiatives, upload images, identify species using artificial intelligence, view basic analytics and download their own data. If you have camera trap data to share and would like to use Wildlife Insights to streamline cataloguing of your data, contact [email protected] with your name, organization, and a description of your project(s) and dataset. Learn more about how Instituto Humboldt in Colombia has monitored its incredibly diverse wildlife in a changing political landscape with camera traps in this video.

AI models, developed by Google, have been trained on 8.7M images to automatically filter out blank images and identify 614 animal species in a fraction of a second. An expert can process anywhere from 300 to 1,000 camera trap images per hour. When that work is distributed across hundreds or thousands of machines in parallel on Google Cloud Platform, finding the images that contain animals is tens of times faster. This allows biologists to spend time on the animals they study, instead of sifting through thousands of empty images looking for animals.

The AI model is trained on images from Conservation International's Tropical Ecology Assessment and Monitoring (TEAM) Network, Snapshot Serengeti, Caltech Camera Traps, North American Camera Trap Images, WWF and One Tam, which together include 614 species from around the world.

We are adding to this core training dataset on an ongoing basis with data from Wildlife Insights core members: Conservation International, Smithsonian Institution, North Carolina Museum of Natural Sciences, Wildlife Conservation Society, ZSL, and WWF. We have also trained using openly available datasets on lila.science.

Our training data contains images labeled with WI taxonomy including Class > Order > Family > Genus > Species. Species, which is the most granular level, is used as a class label to train a multi-class image classifier.
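
To make this concrete, here is a minimal sketch of how taxonomy-labeled records might be flattened to species-level class indices for training. The record fields and helper names are illustrative assumptions, not the actual WI schema.

```python
# Minimal sketch (not the actual WI schema): flattening taxonomy-labeled
# records to species-level class indices for a multi-class classifier.
records = [
    {"class": "Mammalia", "order": "Cetartiodactyla", "family": "Cervidae",
     "genus": "Odocoileus", "species": "hemionus"},   # mule deer
    {"class": "Mammalia", "order": "Proboscidea", "family": "Elephantidae",
     "genus": "Loxodonta", "species": "africana"},    # African elephant
]

# Species, the most granular level, becomes the training label:
# one integer index per species class.
species_names = sorted({f"{r['genus']} {r['species']}" for r in records})
class_index = {name: i for i, name in enumerate(species_names)}

labels = [class_index[f"{r['genus']} {r['species']}"] for r in records]
print(class_index)  # {'Loxodonta africana': 0, 'Odocoileus hemionus': 1}
print(labels)       # [1, 0]
```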

We use Google’s open-source TensorFlow framework to train a deep convolutional neural network for multi-class species classification in camera trap images.

Like humans, AI models generally get better at recognizing and identifying animals if they can look at hundreds or thousands of diverse images of that particular species. If you have camera trap data with many images of popular species or even a few images of rare species, we encourage you to contact us to get trusted tester access to Wildlife Insights, so that you can more easily manage and identify your camera trap images and contribute to the accuracy of Wildlife Insights AI models. 

 

Perhaps most importantly, AI can help identify images without animals. Movement such as blowing grass can trigger a camera, and these blank images can make up as much as 80% of a dataset. Leveraging Google AI Platform Predictions, this functionality alone can dramatically reduce the amount of time spent processing and identifying camera trap data.

The first task for AI has been to identify images not containing any animals, since no one wants to look at thousands of empty images. The AI models in Wildlife Insights catch 78.7% of blank images with an error rate of less than 2%. This allows Wildlife Insights users to minimize the amount of human involvement in sifting through millions of images looking for wildlife, while tens to hundreds of machines do that work for them in a fraction of the time. Upload your images, put computer vision to work for you, and when the results come back you can focus on the images that need your attention.
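
A minimal sketch of how this blank filtering could work with a classifier that includes a "blank" class. The threshold value and function names are illustrative; WI's production pipeline may differ.

```python
# Minimal sketch of blank filtering: keep an image for human review
# unless the model is confident it is blank. The threshold is an
# assumed value, tuned to keep the error rate low.
BLANK_THRESHOLD = 0.95

def is_confident_blank(probs):
    """probs maps class name -> predicted probability for one image."""
    return probs.get("blank", 0.0) >= BLANK_THRESHOLD

predictions = [
    {"blank": 0.98, "mule deer": 0.01},   # grass-triggered empty frame
    {"blank": 0.30, "mule deer": 0.65},   # likely contains an animal
]
for_review = [p for p in predictions if not is_confident_blank(p)]
print(len(for_review))  # 1 -> only one image needs a human look
```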

 

Across the 614 species that the models have been trained on, classes like blue duiker, African elephant, southern pig-tailed macaque or the suni (a small antelope) have between an 80% and 98.6% probability of being correctly predicted by the AI models.

Categorizing species in camera trap data can be very challenging, even for humans. Data quality can play a huge part in our ability to correctly classify an image, and even human experts struggle with images that are poorly illuminated, blurry, or where the animal is very small, hidden behind vegetation, or far away. There are also many sets of species that are easily confused, like bobcats and lynx. Images that have low data quality or contain easily confused species are harder for both humans and AI. That said, AI improves when given many diverse examples of a given species.

This is where you can help! You can correct any mislabeled species using the Wildlife Insights interface and improve the model accuracy for that species class. In addition, by adding your data to Wildlife Insights, you can help provide sufficient examples of each species to our AI systems so that the species can be accurately identified in the future. By uploading data from your unique camera traps, you are not only improving species accuracy. Each individual camera trap has its own set of biases, such as the background, perspective on the animals, and lighting conditions. Your data also increases the camera diversity of our dataset, in turn improving the robustness of our AI models to varied camera conditions.

Once you’ve uploaded your camera trap images to Wildlife Insights, you will see the per-image classification confidence displayed alongside the image in the Identify section of Wildlife Insights. 

You can also look up individual species to see if that species has examples in our current training dataset, and what the model performance is on that class. 

Search for a species of interest. Let’s take mule deer as an example. If there were 100 images of mule deer in total in the data you uploaded, we would be able to identify 81 of those as mule deer. This is the recall for the class. If we predict 100 images as mule deer, about 93 of them are likely to actually be mule deer, and we may misclassify the other 7 as something else. That is the precision for the class.
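
As a worked example in code, using the mule deer numbers quoted above:

```python
# Recall: of the images that really contain mule deer, how many did
# the model find?
true_mule_deer = 100        # images that really contain mule deer
correctly_found = 81        # of those, predicted as mule deer
recall = correctly_found / true_mule_deer             # 0.81

# Precision: of the images the model labeled "mule deer", how many
# really are mule deer?
predicted_mule_deer = 100   # images the model labeled "mule deer"
actually_mule_deer = 93     # of those, truly mule deer
precision = actually_mule_deer / predicted_mule_deer  # 0.93

print(f"recall={recall:.2f}, precision={precision:.2f}")
```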

If you see “Needs More Data” in the metrics, this means our AI model is not able to predict a single class with confidence above a fixed threshold (the threshold is tuned manually). This may happen due to a low number of images of this species in training, due to low diversity in the images (e.g. all from the same camera location, similar background, etc.), or because the characteristics that identify the species are shared among many species, which confuses the model. You can help improve our model accuracy by contributing additional data to Wildlife Insights for your species of interest. As users upload more images from different regions and of more diverse species, our AI models will get better at recognizing more species.

If you do not see your species of interest listed, it means that we currently have no examples of that species in our dataset. This is all the more reason to contribute, so we can continue to grow the number of supported species in Wildlife Insights.

Learn more about assessing classification accuracy for AI models in general.

Convolutional neural networks (CNNs) are a widely successful AI paradigm for computer vision models. At a high level, the model takes an image as a 2D input (an array of pixels in a single channel or RGB channels) and runs mathematical operations in a series of steps. Each step is referred to as a layer. There are specialized layer types used in CNNs for images, like convolution and pooling. Multiple such layers are stacked together to form a deep convolutional neural network.
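
A minimal TensorFlow/Keras sketch of such a stack, with convolution and pooling layers feeding a softmax over species classes. The layer sizes are illustrative; the production model (Inception-V4, discussed below) is far deeper.

```python
import tensorflow as tf

NUM_SPECIES = 614  # species classes, as described above

# Illustrative deep CNN: convolution + pooling layers stacked into a
# multi-class classifier ending in a softmax over species.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),       # RGB image input
    tf.keras.layers.Conv2D(32, 3, activation="relu"), # convolution layer
    tf.keras.layers.MaxPooling2D(),                   # pooling layer
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_SPECIES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```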

Some models have been trained using large amounts of generic image data and can be repurposed by tuning them to a specific problem (like species identification in camera trap images). We start with one such model, called Inception-V4, and fine-tune it for species classification in camera trap images using labeled data from Wildlife Insights.

Why fine-tune from a pre-trained model?

Fine-tuning adapts the model to the characteristics of camera trap images, e.g. blur and low lighting. The pre-trained model has already learned many common visual capabilities, like detecting the edges of objects or recognizing patterns like stripes and spots. These generic visual features are useful for identifying species in camera trap images, and by fine-tuning from a model that already has these capabilities, we are able to quickly leverage those features for our species classification task.
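
A sketch of the fine-tuning workflow in Keras. Note one assumption: WI fine-tunes Inception-V4, but tf.keras ships Inception-V3, which stands in here to illustrate the same pattern of freezing a pre-trained backbone and training a new species head.

```python
import tensorflow as tf

# Load a backbone pre-trained on generic image data (ImageNet).
# InceptionV3 is a stand-in for Inception-V4, which Keras does not ship.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg")
base.trainable = False  # first train only the new species head

inputs = tf.keras.Input(shape=(299, 299, 3))
x = base(inputs, training=False)
outputs = tf.keras.layers.Dense(614, activation="softmax")(x)  # species head
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy")

# After the head converges, unfreeze some top layers of `base` and keep
# training with a lower learning rate, adapting the generic features
# (edges, stripes, spots) to camera trap imagery.
```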

How is the model evaluated?

We believe it is important to evaluate our models in a way that mirrors how they will be used. We want to ensure that models will work well for new users uploading data from camera locations unseen during training. To evaluate how well the model does on unseen data, we hold out the images from some of the camera locations in our dataset to serve as an unseen “test set”. We bin all of our dataset’s camera locations into 10 × 10 meter lat/long grid cells, and then select a random set of these cells to serve as our test set. This ensures that we do not train and evaluate on similar images (e.g. with the same background), which could lead to misleadingly high accuracy numbers.
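
A minimal sketch of that location-held-out split: bin camera locations into grid cells and hold out whole cells, so near-identical backgrounds never land in both train and test. The cell size and variable names are illustrative assumptions, not the exact WI values.

```python
import random

CELL_DEG = 0.01  # illustrative cell size in degrees, not the WI value

def grid_cell(lat, lon):
    # Map a coordinate to the integer grid cell that contains it.
    return (int(lat // CELL_DEG), int(lon // CELL_DEG))

locations = [(-1.234, 36.801), (-1.235, 36.802), (10.500, -84.000)]
cells = sorted({grid_cell(lat, lon) for lat, lon in locations})

# Hold out a random subset of whole cells as the unseen test set.
random.seed(0)
test_cells = set(random.sample(cells, k=max(1, len(cells) // 10)))
test_locations = [loc for loc in locations if grid_cell(*loc) in test_cells]
train_locations = [loc for loc in locations if grid_cell(*loc) not in test_cells]
```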

When a new image is uploaded, we do a forward pass over the trained network (i.e., run through all the layers one by one) and extract a probability distribution over all species classes. We then select the class with the highest probability as the predicted class, and we consider the probability of that class to represent the “confidence” of the model in its prediction. In some cases, the model does not predict any of the classes with a high probability. When this occurs, we return “No CV Result”, short for “No Computer Vision Result”, instead of returning a low-confidence species prediction. As the training dataset grows, our model will become more confident and return fewer “No CV Result” predictions.
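
A sketch of that prediction step: take the probability distribution, pick the top class, and fall back to “No CV Result” below a confidence threshold. The threshold value here is an illustrative assumption.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.65  # assumed value; WI tunes this manually

def predict_label(probs, class_names):
    """probs: softmax output over all classes for one image."""
    top = int(np.argmax(probs))
    if probs[top] < CONFIDENCE_THRESHOLD:
        return "No CV Result"  # no class predicted with high probability
    return class_names[top]

classes = ["blank", "mule deer", "red deer"]
print(predict_label(np.array([0.10, 0.80, 0.10]), classes))  # mule deer
print(predict_label(np.array([0.40, 0.35, 0.25]), classes))  # No CV Result
```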

Wildlife Insights is hosted on the Google Cloud Platform, and inference is done using Google AI Platform Predictions. Once the images are uploaded, depending on your network bandwidth, we are capable of handling hundreds of queries per second (qps) for online prediction, parallelized across hundreds to thousands of machines. On a single GPU, we can process about 18,000 images per hour, which can be scaled further by running across hundreds of GPUs. For reference, a human expert can label about 600 images per hour, so a single GPU is roughly thirty times faster. The purpose of the AI models is to assist human experts by freeing them from flipping through most of the images, leaving only a fraction for their expert opinion.

If you have labelled images that you would like to contribute to improving our AI models, or images that you would like to upload to Wildlife Insights, contact us to work on ingesting your data.

When you upload your camera trap images to Wildlife Insights, our AI models run image classification in the background, and you can see the results in the “Identify” tab. Alongside each image we display the common name (genus and species) as well as the confidence.

We return a species label for an image only if we are relatively sure about our prediction. If the AI is not confident, we return “No CV Result”, for “No Computer Vision Result”. This confidence is based on the score the model returns for the top predicted class, on a scale of 0-100%.

The chart below indicates how the model performs for specific species. For example, if you upload an image that contains a red deer, then 95.30% of the time we will correctly identify the class, but 3.42% of the time we may not be confident about our prediction, so we return no result. This may happen for multiple reasons: for example, we did not have enough diverse images (different backgrounds, different profiles of the animal, different lighting conditions, etc.) for this species when training, or there are multiple similar-looking species that confuse our model.

Model Performance

Our AI models are still learning! If you see an error in classification, you can click “Edit Identification” and correct the species label. This helps our model keep improving for that species class.

Users of Wildlife Insights can edit the suggested result from the Wildlife Insights AI models. When we see that enough new images have user-generated or edited classifications, we will retrain our models with this new data. If you have camera trap images, you can directly contribute to the improvement of Wildlife Insights’ AI models, to help accurately identify the animals you care about. Please contact us to become a Wildlife Insights trusted tester.

For mammals, Wildlife Insights uses the IUCN Red List of Threatened Species as the primary taxonomy. For birds, we use BirdLife International’s taxonomy. We also have several classes for non-animals, such as car, equestrian, domestic dog, etc.

Users are also able to add custom notes to image metadata for local or indigenous names of species.

Wildlife Insights and Google are focusing on developing a model that can accurately identify species but not individual animals. There are other groups that are successfully training computer vision algorithms to identify individual animals, and we hope to work together with those groups in the future to continue to expand the scope of Wildlife Insights.

The infrastructure for Wildlife Insights can support video and other types of sensor data, including acoustic data, but initially only provides support for camera trap images. The long term plan for Wildlife Insights is to support multiple sensor data types.

For our first release, we really want to hear from you if you already have camera trap data, whether you’ve already catalogued it and labelled species in images or not. Wildlife Insights is open to anyone involved in camera trapping to advance wildlife conservation. Camera trap data providers may sign up for an account to share data and anyone can browse the global database.

Please contact us if you’d like to get added to the trusted tester group, in order to share your data and run it through the Wildlife Insights AI models.

Users are able to upload images containing humans to Wildlife Insights, and these will be classified by the AI models during the upload process. However, images of humans will not be made public, nor are they downloadable. Some wildlife researchers are interested in studying human-wildlife interactions. To facilitate this, metadata for images containing humans (which does not contain any personally identifiable information) will be available to the public for download (please refer to our Terms of Use for more information).

Yes! It depends on which group you most align with.

The field of species classification on sensor-based data is really just beginning. There are a number of approaches we will explore to continue to improve the models... and, if we haven’t mentioned it yet, more data always helps!

Here are some techniques and experiments we may run to improve our results:

All data from Wildlife Insights core partners will be shared with the public when the platform is released. However, some users may want to keep data private to comply with legal requirements or to publish research. Those users will be able to embargo data for a limited amount of time and images will eventually become public.

After the first release, anyone can download public data* from Wildlife Insights. Initially, all public metadata (including identifications and locations) can be downloaded. Downloads of public images will be made available at a later date. All data are licensed under Creative Commons and can be used according to the designated license.

*Public data will never include embargoed data, images of humans or the exact location of sensitive species.

Any data published on Wildlife Insights may be used by Wildlife Insights to develop aggregated data products, including global analyses. Wildlife Insights may use these analyses to produce annual reports on the state of wildlife.

Wildlife Insights is committed to making the platform available to anyone who is working to advance wildlife conservation. The platform will be free of charge initially, and Wildlife Insights is exploring tiered services, from Basic (free) to Premium (subscription-based).

Membership in Wildlife Insights is open to anyone involved in recording vertebrate diversity through camera trap images. Prospective members are expected to have interests in:

Wildlife Insights has two membership categories:

You do not need to join Wildlife Insights in order to share in the benefits of the website. If you wish to provide camera trap data to the site, you will need to sign a Data Provider Agreement. If you wish to use camera trap data from the site for non-commercial purposes, you will need to sign a Data User Agreement.

Wildlife Insights has adopted a Dynamic Governance Model that promotes inclusive decision-making and contributions to the WI Core Purpose and Mission. Governance is distributed to every level of membership, with a Steering Committee serving as the highest governing body of WI. The Steering Committee includes one voting representative from each of the Core Member Institutions and two non-voting representatives from each of the Standing Committees. The Standing Committees provide guidance and recommendations to the Steering Committee, which reviews and approves work plans in pursuit of WI goals.

The four Standing Committees (Technology, Science and Analytics, Outreach, and Sustainability) ensure that Wildlife Insights strives to meet the needs of WI users and stakeholders by providing programmatic guidance on the development and implementation of the partnership and platform. Individuals from both Core and Associate Member Institutions are invited to serve on the Standing Committees, which are described below:

 

The Technology Committee leads and oversees the development of technology systems and tools that meet the needs of target stakeholders and groups.

 

The Science and Analytics Committee ensures that the WI platform supports the use and development of cutting-edge statistics and recommends analytical and visualization approaches for addressing them.

 

The Outreach Committee advises on topics related to the recruitment, engagement and communications strategies of Wildlife Insights.

 

The Sustainability Committee provides guidance on the best approaches and strategies for maintaining long-term financial and operational stability.

Wildlife Insights will not display or enable the download of images of humans in the public database. However, a public record of the image (e.g., the date, time, identification) will document that a human was present and observed. There will be additional security measures in place to ensure all privacy needs are met, including options for a user to delete images of humans from their project.

Data providers, including WI core and associate partners, share their data with Wildlife Insights in return for the use of a wide variety of tools and services available on the Wildlife Insights Platform. One of these services is the ability to generate identifiers for every dataset that is shared with Wildlife Insights. Identifiers and citations make it easy to publish data and receive recognition for your work. WI recommends that dataset citations include the following information: Author(s), Year, Dataset Title, Identifier, Data Repository, Version. It is up to the data provider and user to ensure appropriate attribution.

While Wildlife Insights is committed to open data sharing, we recognize that revealing the locations of certain species may put them at greater risk. For any sensitive species, Wildlife Insights will obfuscate the location so that the exact location cannot be determined from the data. For example, we will not reveal the sample locations of Endangered or CITES-listed species that are hunted for commercial purposes and which lack verifiable protection enforcement mechanisms. The list of sensitive species is defined and managed by WI based on best practices and may be updated from time to time. Detailed biodiversity sharing guidelines will be available soon and a list of sensitive species can be found here. In all cases, data providers must provide raw, unedited data, including geographic location information, to Wildlife Insights. These data may be used for producing Wildlife Insights aggregated data products, but these products will not expose sensitive data.
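
One common way to obfuscate a location is to snap coordinates to a coarse grid. The sketch below illustrates the idea; the rounding precision is an assumption, not WI’s published method.

```python
# Illustrative location obfuscation: round coordinates so the exact
# camera location cannot be recovered from published data.
def obfuscate(lat, lon, decimals=1):
    # 0.1 degree is roughly an 11 km grid at the equator (assumed precision).
    return (round(lat, decimals), round(lon, decimals))

print(obfuscate(-1.28333, 36.81667))  # (-1.3, 36.8)
```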

 

Wildlife Insights promotes sharing information for the benefit of biodiversity conservation. We recognize, however, that data providers also want to publish. Wildlife Insights will provide the option to embargo data for a limited period of 24 months before the data is made public. Data providers may request additional extensions by contacting Wildlife Insights at [email protected]

For each project, data providers may choose to license data under Creative Commons licenses:

These licenses are described below:

By agreeing to the Terms of Use, you grant Wildlife Insights the right to use your data, including sensitive species data and embargoed data, for certain purposes including:

*All of the mentioned uses will be with attribution to you.

**Derived products may be produced by Wildlife Insights or a Wildlife Insights core partner. Wildlife Insights will not publish derived products that include your embargoed data in peer-reviewed publications without your consent.

Derived products are aggregations of data, summary statistics and information products including charts, maps or graphs. Wildlife Insights may produce derived products to provide the public with timely information that captures large-scale biodiversity trends. In order for these metrics to be relevant and effective, the inclusion of recent or even near-real time information is key. Wildlife Insights endeavors to support this need, while respecting the data privacy terms of your dataset and ensuring data attribution.

If your data is used, WI will provide attribution as required under license. Attribution may include using your organization name and logo.

Wildlife Insights permits data providers to embargo data for up to 24 months. Two extensions of up to 12 months each may be requested by sending an email to [email protected]. Extension requests will be reviewed and approved by Wildlife Insights on a case-by-case basis. The embargo period is applied to an entire project, but is measured separately for each deployment (i.e., a unique placement of a camera in space and time). Embargoed data will not be available to users outside of your project. However, the metadata of any embargoed project will still be available in the public database. Note that by signing the Terms of Use, you give Wildlife Insights and Wildlife Insights core partners permission to use your data, including embargoed data, to develop derived products. These derived products may be displayed on the Wildlife Insights website or used in presentations, for example, but will not be used in peer-reviewed publications without your consent.

You may remove unintended uploads if the removal is completed within 48 hours after data is uploaded to Wildlife Insights. After this brief period, you may only remove data from Wildlife Insights by sending a request to [email protected]. Wildlife Insights administrators will review requests and grant approvals on a case-by-case basis.

If your account is deleted, your data will remain in the Wildlife Insights database. Your public data will remain accessible to other users and your embargoed data will remain embargoed through the end of the embargo period. If you are an organization administrator and delete your account, you will be prompted to assign another user to the administrator role.

We will provide you with ninety days’ notice of our intention to terminate the Service. Third-party sub-licensees working on improving computer vision models may still have indefinite access to your data, but only for the purpose of advancing technology related to conservation and for no other reason.
