Wildlife Insights is the largest and most diverse collection of camera trap images that is open to the public. Wildlife Insights also provides unique tools, including artificial intelligence models for species identification, automated statistics and a cloud-based platform to easily share data. Wildlife Insights is the only system that provides all of these features in one place, making it easy for decision-makers to access the information they need to protect wildlife.
Most of the Wildlife Insights partners have been using camera traps to collect information on wildlife populations for years. Wildlife Insights grew out of a joint recognition that the data being collected individually could provide much more value to the conservation community if brought together, standardized and made openly available. As users upload data into Wildlife Insights, the number of contributors will grow, the database will grow in size and representativeness, and the species identification AI will continue to improve.
During our first release, members of our Trusted Tester program can use all of the features in Wildlife Insights to create projects and initiatives, upload images, identify species using artificial intelligence, view basic analytics and download their own data. If you have camera trap data to share and would like to apply to become a Trusted Tester, please provide us with more details about your project(s). Note that your account and password will not be recognized until your account is approved by a Wildlife Insights administrator.
Anyone can visit the Wildlife Insights Explore page to browse projects and discover select camera trap images.
In the future, the public will also be able to download data from the Wildlife Insights Explore page. Data accessible for public download on the Explore page is only made available after certain restrictions are implemented to protect sensitive species and address other privacy concerns. These measures include:
- obfuscating (blurring) the exact location of any deployment in an effort to limit access to sensitive species;
- removing all images of humans from public pages and downloads;
- and limiting access to embargoed data (i.e., images and deployment information). Public users may view details associated with an embargoed project such as the project name, objectives and organization name but will not be able to download data from the embargoed project.
Learn more about how Instituto Humboldt in Colombia has monitored its incredibly diverse wildlife in a changing political landscape with camera traps in this video.
AI models, developed by Google, have been trained on 11.6M images to automatically filter out blank images and identify 732 animal species in a fraction of a second. An expert can process anywhere from 300 to 1,000 camera trap images per hour. When that work is distributed across hundreds or thousands of machines in parallel on Google Cloud Platform, finding the images that contain animals is thousands of times faster. This allows biologists to spend time on the animals of interest to them, instead of sifting through thousands of empty images looking for animals.
We are using a deep convolutional neural net for multi-class classification using Google’s open source TensorFlow framework to train an AI model to identify animal species in camera trap images.
Like humans, AI models generally get better at recognizing and identifying animals if they can look at hundreds or thousands of diverse images of that particular species. If you have camera trap data with many images of popular species or even a few images of rare species, we encourage you to contact us to get trusted tester access to Wildlife Insights, so that you can more easily manage and identify your camera trap images and contribute to the accuracy of Wildlife Insights AI models.
The AI model is trained on images from Conservation International's Tropical Ecology and Monitoring (TEAM) Network, Snapshot Serengeti, Caltech Camera Traps, North American Camera Trap Images, WWF and One Tam, which include 837 classes with 732 species from around the world.
We are adding to this core training dataset on an ongoing basis with data from Wildlife Insights core members: Conservation International, Smithsonian Institution, North Carolina Museum of Natural Sciences, Wildlife Conservation Society, ZSL, and WWF. We have also trained using openly available datasets on lila.science.
Our training data contains images labeled with WI taxonomy including Class > Order > Family > Genus > Species. Species, which is the most granular level, is used as a class label to train a multi-class image classifier.
Perhaps most importantly, AI can help identify images without animals, where things like blowing grass can trigger a camera; such blank images can make up as much as 80% of a dataset. Leveraging Google AI Platform Predictions, this functionality alone can dramatically reduce the amount of time spent processing and identifying camera trap data.
The first task for the AI models has been to identify images not containing any animals, since no one wants to look at thousands of empty images. The AI models in Wildlife Insights catch 47% of blank images with an error rate of less than 2.5%. This minimizes the human effort spent sifting through millions of images looking for wildlife, letting tens to hundreds of machines do that work in a fraction of the time. Upload your images, put computer vision to work for you, and when the results come back you can focus on the images that need your attention.
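In practice, blank filtering reduces to partitioning images by the model's blank-class confidence. The sketch below is illustrative only; the threshold value and score format are assumptions, not Wildlife Insights' actual settings.

```python
# Hypothetical sketch of blank-image filtering by model confidence.
# The 0.9 threshold is an assumption for illustration.

def filter_blanks(predictions, blank_threshold=0.9):
    """Split images into 'likely blank' and 'needs human review'.

    predictions: list of (image_id, blank_probability) pairs.
    Images whose blank probability meets the threshold are set aside,
    so a reviewer only sees the remainder.
    """
    likely_blank = [img for img, p in predictions if p >= blank_threshold]
    needs_review = [img for img, p in predictions if p < blank_threshold]
    return likely_blank, needs_review

scores = [("img_001", 0.98), ("img_002", 0.12), ("img_003", 0.95)]
blank, review = filter_blanks(scores)
```

Raising the threshold catches fewer blanks but lowers the risk of discarding an image that actually contains an animal.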
Across the 837 classes and 732 species that the models have been trained on, classes like blue duiker, African elephant, southern pig-tailed macaque or a small antelope called suni have between an 80% and 98.6% probability of being correctly predicted by the AI models.
Categorizing species in camera trap data can be very challenging, even for humans. Data quality can play a huge part in our ability to correctly classify an image, and even human experts struggle with images that are poorly illuminated, blurry, or where the animal is very small, hidden behind vegetation, or far away. There are also many sets of species that are easily confused, like bobcats and lynx. Images that have low data quality or contain easily-confused species are harder for both humans and AI. That said, AI improves when given many, diverse examples of a given species.
This is where you can help! You can correct any mis-labeled species using the Wildlife Insights interface and improve the model accuracy for that species class. In addition, by adding your data to Wildlife Insights, you can help provide sufficient examples of each species to our AI systems so that the species can be accurately identified in the future. By uploading data from your unique camera traps you are not only improving species accuracy. Each individual camera trap has a set of biases, such as the background, perspective on the animals, and lighting conditions. Your data is also increasing the camera diversity of our dataset, and in turn improving the robustness of our AI models to varied camera conditions.
Once you’ve uploaded your camera trap images to Wildlife Insights, you will see the per-image classification confidence displayed alongside the image in the Identify section of Wildlife Insights.
You can also look up individual species to see if that species has examples in our current training dataset, and what the model performance is on that class.
Search for a species of interest; let's take mule deer as an example. If there were 100 images of mule deer in the data you uploaded, we would be able to identify 81 of those as mule deer. This is the recall for the class. If we predict 100 images as mule deer, about 93 of them are likely to actually be mule deer and the other 7 are likely something else. That is the precision for the class.
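These two metrics can be computed directly from the counts in the mule deer example; the helper functions below are just the standard definitions:

```python
def recall(true_positives, false_negatives):
    """Of all actual images of the species, what fraction did we find?"""
    return true_positives / (true_positives + false_negatives)

def precision(true_positives, false_positives):
    """Of all images we labeled as the species, what fraction were correct?"""
    return true_positives / (true_positives + false_positives)

# 100 actual mule deer images, 81 identified as mule deer:
class_recall = recall(true_positives=81, false_negatives=19)       # 0.81
# 100 images predicted as mule deer, 93 of them actually mule deer:
class_precision = precision(true_positives=93, false_positives=7)  # 0.93
```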
If you see “Needs More Data” in the metrics, it means our AI model is not able to predict a single class with confidence above a fixed threshold (the threshold is tuned manually). This may happen due to a low number of images of this species in training, due to low diversity in the images (e.g., all from the same camera location, similar background, etc.), or because the characteristics that identify the species are shared with many other species, which confuses the model. You can help improve our model accuracy by contributing additional data to Wildlife Insights for your species of interest. As users upload more images from different regions and of more diverse species, our AI models will get better at recognizing more species.
If you do not see your species of interest listed it means that we currently have no examples of that species in our dataset. This is all the more reason to contribute, so we can continue to grow the number of supported species in Wildlife Insights.
Learn more about assessing classification accuracy for AI models in general.
Convolutional neural networks are a widely successful AI paradigm for computer vision models. At a high level, the model takes an image as a 2D input (an array of pixels in a single channel or RGB channels) and runs mathematical operations in a series of steps. Each step is referred to as a layer. There are some specialized layer types used in CNNs for images, like convolution and pooling. Multiple such layers are stacked together to form a deep convolutional neural network.
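To make the two layer types named above concrete, here is a minimal pure-Python sketch of a single "valid" convolution followed by max pooling. A real CNN stacks many such layers and learns the kernel values during training; the kernel and input here are toy values.

```python
def conv2d(image, kernel):
    """'Valid' 2D convolution on nested lists: slide the kernel over the
    image and take the weighted sum at each position."""
    h, w = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def max_pool(fm, size=2):
    """Max pooling: downsample the feature map by keeping only the
    maximum value in each size x size window."""
    return [[max(fm[i * size + a][j * size + b]
                 for a in range(size) for b in range(size))
             for j in range(len(fm[0]) // size)]
            for i in range(len(fm) // size)]

image = [[1.0] * 5 for _ in range(5)]   # toy 5x5 single-channel "image"
kernel = [[1.0, 1.0], [1.0, 1.0]]       # toy 2x2 filter
features = max_pool(conv2d(image, kernel))  # a 2x2 feature map
```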
Some models have been trained using large amounts of generic image data and can be re-purposed by tuning them to a specific problem (like species identification in camera trap images). We start with one such model called Inception-V4 and fine-tune the model for species classification in camera trap images using labeled data from Wildlife Insights.
Why fine-tune from a pre-trained model?
Fine-tuning is done to adapt the model to characteristics of camera trap images, e.g., blurring, low lighting, etc. There are many common characteristics learned by the pre-trained model, like detecting edges of objects or identifying patterns like stripes and spots. These generic visual features are useful for identifying species in camera trap images, and by fine-tuning from a model that already has these capabilities, we are able to quickly leverage those features for our species classification task.
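As a toy illustration of the idea (not Wildlife Insights' actual training code, which fine-tunes Inception-V4 in TensorFlow), the sketch below "freezes" a stand-in feature extractor and trains only a new head with plain gradient descent. All functions and numbers are made up.

```python
# Toy sketch of fine-tuning: reuse a "pre-trained" feature extractor
# unchanged and train only the new classification head on top of it.

def pretrained_features(x):
    # Stands in for the frozen layers: generic features learned elsewhere.
    return [x, x * x]

def train_head(data, lr=0.2, epochs=500):
    """Fit head weights on top of the frozen features with simple SGD."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            f = pretrained_features(x)
            err = w[0] * f[0] + w[1] * f[1] - y
            w = [w[0] - lr * err * f[0], w[1] - lr * err * f[1]]
    return w

# Target function y = 2*x + 1*x^2: only head weights [2, 1] must be learned,
# since the "pre-trained" features already expose x and x^2.
data = [(x, 2 * x + x * x) for x in (0.5, 1.0, 1.5)]
head = train_head(data)
```

Because the generic features already capture the useful structure, only a small number of task-specific parameters need to be trained, which is why fine-tuning needs far less data than training from scratch.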
How is the model evaluated?
We believe it is important to evaluate our models in a manner similar to how they will be used. We want to ensure that models will work well for new users uploading data from camera locations unseen during training. In order to evaluate how well the model does on unseen data, we hold out the images from some of the camera locations in our dataset to serve as an unseen “test set”. We bin all of our dataset camera locations into 10 x 10 meter lat/long grid cells, and then select a random set of these grid cells to serve as our test set. This ensures that we do not train and evaluate on similar images (e.g. same background), which may lead to incorrectly high accuracy numbers.
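A simplified sketch of this kind of location-based split (the grid size, sampling fraction, and function names below are illustrative assumptions): bin each camera location into a grid cell, then hold out whole cells.

```python
import random

def grid_cell(lat, lng, cell_size=0.0001):
    """Assign a camera location to a grid cell. 0.0001 degrees of latitude
    is roughly 10 meters; the exact cell size here is an assumption."""
    return (int(lat // cell_size), int(lng // cell_size))

def spatial_split(locations, test_fraction=0.2, seed=42):
    """Hold out entire grid cells, so train and test never share a
    camera location (and hence never share a background)."""
    cells = sorted({grid_cell(lat, lng) for lat, lng in locations})
    rng = random.Random(seed)
    test_cells = set(rng.sample(cells, max(1, int(len(cells) * test_fraction))))
    train = [loc for loc in locations if grid_cell(*loc) not in test_cells]
    test = [loc for loc in locations if grid_cell(*loc) in test_cells]
    return train, test
```

Splitting by cell rather than by image is what prevents near-duplicate frames from the same camera appearing on both sides of the split.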
When a new image is uploaded, we do a forward pass over the trained network (i.e. run through all the layers one-by-one) and extract a probability distribution over all species classes. We then select the class with highest probability as the predicted class. We consider the probability of the highest class to represent the “confidence” of the model in its prediction. In some cases, the model does not predict any of the classes with a high probability. When this occurs, we return “No CV Result”, short for “No Computer Vision Result,” instead of returning a low-confidence species prediction. As the training dataset grows, our model will become more confident and return fewer “No CV Result” predictions.
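The prediction step described above amounts to a softmax over class scores plus a confidence cutoff. This sketch uses a made-up threshold and class names; the actual tuned value is not published here.

```python
import math

def softmax(logits):
    """Turn raw class scores into a probability distribution."""
    exps = [math.exp(v - max(logits)) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits, class_names, threshold=0.65):
    """Return (label, confidence); return 'No CV Result' when no class
    is confident enough. The 0.65 threshold is an assumption."""
    probs = softmax(logits)
    confidence = max(probs)
    if confidence < threshold:
        return "No CV Result", confidence
    return class_names[probs.index(confidence)], confidence

classes = ["blue duiker", "suni", "blank"]
label, conf = predict([5.0, 0.1, 0.2], classes)   # one clearly dominant class
label2, conf2 = predict([1.0, 1.0, 1.0], classes)  # no class stands out
```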
During training, our AI models learn to recognize the unique characteristics of different species, such as patterns, textures, colors, etc. Using integrated gradients, we can visualize which parts of the image are most important to the model when making a species prediction. See examples of images and the associated integrated gradients visualizations below.
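Integrated gradients itself is straightforward to approximate numerically: scale the input along a straight path from a baseline, average the gradients along the path, and multiply by the input difference. The sketch below uses finite-difference gradients on a toy function rather than a real image model.

```python
def integrated_gradients(f, x, baseline, steps=100):
    """Approximate integrated gradients for a scalar function of a vector.

    Attribution_i = (x_i - baseline_i) * average gradient along the
    straight-line path from baseline to x (midpoint Riemann sum)."""
    n = len(x)
    avg_grads = [0.0] * n
    for k in range(1, steps + 1):
        # Midpoint of the k-th segment of the path.
        point = [baseline[i] + (k - 0.5) / steps * (x[i] - baseline[i])
                 for i in range(n)]
        for i in range(n):
            eps = 1e-5  # central finite difference for the gradient
            up, down = point[:], point[:]
            up[i] += eps
            down[i] -= eps
            avg_grads[i] += (f(up) - f(down)) / (2 * eps) / steps
    return [(x[i] - baseline[i]) * avg_grads[i] for i in range(n)]

# Toy "model": f(v) = v0^2 + 3*v1. Attributions should sum to
# f(x) - f(baseline), the completeness property of integrated gradients.
f = lambda v: v[0] ** 2 + 3 * v[1]
attr = integrated_gradients(f, [1.0, 2.0], [0.0, 0.0])
```

For an image model, the same recipe is applied per pixel, and the resulting attributions are rendered as the heatmap-style visualizations described above.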


Wildlife Insights is hosted on the Google Cloud Platform, and inference is done using Google AI Platform Predictions. Once images are uploaded (upload speed depends on your network bandwidth), we can handle hundreds of queries per second (qps) for online prediction, parallelized across hundreds to thousands of machines. On a single GPU we can process about 18,000 images per hour, which can be scaled further by running across hundreds of GPUs. For reference, a human expert can label about 600 images per hour. The purpose of the AI models is to assist human experts by freeing them from flipping through most of the images, leaving only a fraction for their expert opinion.
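The back-of-envelope arithmetic implied by these throughput numbers, for a hypothetical one-million-image dataset:

```python
def processing_hours(num_images, rate_per_hour, workers=1):
    """Hours needed to process a dataset at a given per-worker rate."""
    return num_images / (rate_per_hour * workers)

MILLION = 1_000_000
human_hours = processing_hours(MILLION, 600)                    # ~1,667 hours
one_gpu_hours = processing_hours(MILLION, 18_000)               # ~56 hours
hundred_gpu_hours = processing_hours(MILLION, 18_000, workers=100)  # ~0.6 hours
```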
If you have images that you have already labelled, that you would like to contribute to improving our AI models, or images that you would like to upload to Wildlife Insights, contact us to work on ingesting your data.
When you upload your camera trap images to Wildlife Insights, our AI models run image classification in the background, and you can see the results in the “Identify” tab. Alongside the image we display the predicted species name as well as the model’s confidence.
We return the species label for an image only if we are relatively sure about our prediction. If the AI is not confident, we return "No CV Result", short for "No Computer Vision Result." This confidence is based on the score the model returns for the top predicted class, on a scale of 0-100%.
The chart below indicates how the model performs for specific species. For example, if you upload an image that contains a red deer, then 95.30% of the time we will correctly identify the class, but 3.42% of the time we may not be confident about our prediction and will return no result. This can happen for multiple reasons: for example, we may not have had enough diverse images (different backgrounds, different profiles of the animal, different lighting conditions, etc.) for this species during training, or multiple similar-looking species may confuse our model.

Our AI models are still learning! If you see an error in classification, you can click “Edit Identification” and correct the species label. This helps our model get better at identifying that species class.
Users of Wildlife Insights have the capability to edit the suggested result from the Wildlife Insights AI models. When we see that enough new images have user-generated or edited classifications, we will retrain our models with this new data. If you have camera trap images, you can directly contribute to the improvement of Wildlife Insights’ AI models, to help accurately identify the animals you care about. Please contact us to become a Wildlife Insights trusted tester.
For mammals, Wildlife Insights uses the IUCN Red List of Threatened Species as the primary taxonomy. For birds, we use Birdlife International’s taxonomy. We also have several classes for non-animals, such as car, equestrian, domestic dog, etc.
Users are also able to add custom notes to image metadata for local or indigenous names of species.
Wildlife Insights and Google are focusing on developing a model that can accurately identify species but not individual animals. There are other groups that are successfully training computer vision algorithms to identify individual animals, and we hope to work together with those groups in the future to continue to expand the scope of Wildlife Insights.
The infrastructure for Wildlife Insights can support video and other types of sensor data, including acoustic data, but initially only provides support for camera trap images. The long term plan for Wildlife Insights is to support multiple sensor data types.
For our first release, we really want to hear from you if you already have camera trap data, whether you’ve already catalogued it and labelled species in images or not. Wildlife Insights is open to anyone involved in camera trapping to advance wildlife conservation. Camera trap data providers may sign up for an account to share data and anyone can browse the global database.
Please contact us if you’d like to get added to the trusted tester group, in order to share your data and run it through the Wildlife Insights AI models.
Users are able to upload images containing humans to Wildlife Insights, and they will be classified by AI models in the uploading process. However, images of humans will not be made public, nor are they downloadable. Some wildlife researchers are interested in studying human-wildlife interactions. In order to facilitate this, metadata for images containing humans (that does not contain any personally identifiable information) will be available to the public for download (please refer to our Terms of Use for more information).
Yes! Depending on what group you most align with.
- If you are a wildlife expert, you can contribute by helping us refine our species metadata to understand the behavioral patterns of certain species. You can also help improve our models by giving feedback on our identifications if you find mistakes.
- If you are a computer scientist, we would be delighted to collaborate on various research angles in computer vision and AI that are relevant in this domain. If you’re interested in contributing or being part of our discussions, you can email [email protected] and request to be invited to the AI for Conservation slack channel (https://aiforconservation.slack.com), where we have a #wildlifeinsights discussion thread. We have listed a few interesting research directions we have been exploring.
- If you are a nature enthusiast, please explore the Discover page to see camera trap data from all around the world and learn about the amazing world of wildlife and biodiversity.
The field of species classification on sensor-based data is really just beginning. There are a number of approaches we will explore to continue to improve the models... and, if we haven’t mentioned it yet, more data always helps!
Here are some techniques and experiments we may run to improve our results:
- Leveraging the hierarchical structure of the taxonomy
- Including spatio-temporal information in training and/or when predicting
- Combining different types of AI models (e.g. bounding box detectors) to enable the classifier to focus on areas of interest within the image
- Leveraging sequential information inherent in camera trap images that appear in bursts
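As an illustration of the first idea, species probabilities can be summed up the taxonomy, so that an uncertain species-level prediction can still yield a confident genus-level label. The taxonomy entries, threshold, and function name below are made up for illustration.

```python
# Hypothetical sketch: roll species probabilities up to genus when no
# single species is confident enough. Taxonomy and threshold are toy values.
TAXONOMY = {
    "mule deer": "Odocoileus",
    "white-tailed deer": "Odocoileus",
    "red deer": "Cervus",
}

def rollup(species_probs, threshold=0.65):
    """Return the most specific taxonomic rank predicted with confidence."""
    best_species = max(species_probs, key=species_probs.get)
    if species_probs[best_species] >= threshold:
        return ("species", best_species)
    # Sum probabilities of sibling species into their genus.
    genus_probs = {}
    for sp, p in species_probs.items():
        genus = TAXONOMY[sp]
        genus_probs[genus] = genus_probs.get(genus, 0.0) + p
    best_genus = max(genus_probs, key=genus_probs.get)
    if genus_probs[best_genus] >= threshold:
        return ("genus", best_genus)
    return ("no result", None)

uncertain = rollup({"mule deer": 0.40, "white-tailed deer": 0.35, "red deer": 0.25})
confident = rollup({"mule deer": 0.80, "white-tailed deer": 0.10, "red deer": 0.10})
```

Here the model cannot separate the two Odocoileus deer, but their combined probability still supports a useful genus-level identification.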
Wildlife Insights encourages users to share their data publicly, but also recognizes that data providers may want to publish their data first. Wildlife Insights will provide the option to embargo data for 24 months before the data is made public. Data providers may request an additional 24 month embargo (for a total of 48 months) by contacting Wildlife Insights at [email protected]. Embargoed data will not be available to the public for the duration of the embargo, but project metadata (e.g., project name, objectives) may be shared with the public.
Other users may need to keep data private in order to comply with legal requirements. If you are restricted from sharing data publicly, please contact [email protected] with details of your project and sharing requirements.
Note that by signing the Terms of Use, you provide Wildlife Insights and Wildlife Insights core partners permission to use your data, including embargoed data, to develop derived products. These derived products may be displayed on the Wildlife Insights website or used in presentations, for example, but will not be used in peer-reviewed publications without your consent.
Public downloads of Wildlife Insights data will be available in an upcoming release of the platform. Data available to the public will never include the exact location of sensitive species, images of humans or embargoed data.
Anyone who downloads data from Wildlife Insights must agree to the Terms of Use and provide their contact information and intended use of the data. The Terms of Use allow a user to share data and images in accordance with certain Creative Commons licenses.
Any data published on Wildlife Insights may be used by Wildlife Insights to develop aggregated data products, including global analyses. Wildlife Insights may use these analyses to produce annual reports on the state of wildlife.
Wildlife Insights is committed to making the platform available to anyone who is working to advance wildlife conservation. The platform will be free of charge initially, and Wildlife Insights is exploring tiered services, from Basic (free) to Premium (subscription-based).
Membership in Wildlife Insights is open to anyone involved in recording vertebrate diversity through camera trap images. Prospective members are expected to have interests in:
- Conserving and understanding the ecology and distribution of vertebrate species
- Interacting with other Members; and
- Contributing to the attainment of WI's goals.
Wildlife Insights has two membership categories:
- Core Members: Institutions who are actively engaged in carrying out the activities to develop Wildlife Insights.
- Associate Members: Individuals or institutions who support the goals of Wildlife Insights by participating in a WI Working Group and/or by providing expertise to Wildlife Insights.
You do not need to join Wildlife Insights in order to share in the benefits of the website. If you wish to provide camera trap data to the site, you will need to sign a Data Provider Agreement. If you wish to use camera trap data from the site for non-commercial purposes, you will need to sign a Data User Agreement.
Wildlife Insights has adopted a Dynamic Governance Model that promotes inclusive decision-making and contributions to the WI Core Purpose and Mission. Governance is distributed to every level of membership, with a Steering Committee serving as the highest governing body of WI. The Steering Committee includes one voting representative from each of the Core Member Institutions and two non-voting representatives from each of the Standing Committees. The Standing Committees provide guidance and recommendations to the Steering Committee, which reviews and approves work plans in pursuit of WI goals.
The four Standing Committees (Technology, Science and Analytics, Outreach, and Sustainability) ensure that Wildlife Insights strives to meet the needs of WI users and stakeholders by providing programmatic guidance on the development and implementation of the partnership and platform. Individuals from both Core and Associate Member Institutions are invited to serve on the Standing Committees, which are described below:
The Technology Committee leads and oversees the development of technology systems and tools that meet the needs of target stakeholders and groups.
The Science and Analytics Committee ensures that the WI platform supports the use and development of new cutting-edge statistics and recommends analytical and visualization approaches for addressing them.
The Outreach Committee advises on topics related to the recruitment, engagement and communications strategies of Wildlife Insights.
The Sustainability Committee provides guidance on the best approaches and strategies for maintaining long-term financial and operational stability.
Wildlife Insights will not knowingly display or enable the download of images of humans in the public database. However, a record of the image (i.e., the date, time, identification) will be available for download*. Within a user’s private workspace, images of humans may be stored, hidden or deleted by the user.
*Public downloads are not yet available in Wildlife Insights
Wildlife Insights follows Creative Commons standards for licensing, which provide guidelines for how data should be shared and distributed.
For each project you create you will be prompted to assign Creative Commons licenses separately to the metadata and images. When your data is downloaded, you will receive a notification of the download with the user’s contact information and objectives for using the data. The user will also be provided with the Creative Commons license assigned to your data. It is up to the data user and provider to ensure the data is appropriately attributed.
*In future releases, each project in Wildlife Insights will also be associated with a permanent and unique identifier (i.e., a DOI) that can be referenced via a url. The identifier makes it easy for others to cite your work and acknowledge your contributions.
While Wildlife Insights is committed to open data sharing, we recognize that revealing the location of certain species may increase their risk of threat. To protect the location of sensitive species, Wildlife Insights will obfuscate, or blur, the location information of all deployments made available for public download* so that the exact location of a deployment containing sensitive species cannot be determined from the data. Practices to obfuscate the location information associated with sensitive species may be updated from time to time with feedback from the community.
*Public downloads are not yet available in Wildlife Insights
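One simple obfuscation scheme consistent with this description snaps each deployment to the center of a coarse grid cell, so the published coordinates never reveal the camera's true position. The 0.1-degree cell size and function name here are illustrative assumptions, not Wildlife Insights' actual practice.

```python
def obfuscate_location(lat, lng, grid_degrees=0.1):
    """Return the center of the coarse grid cell containing (lat, lng).

    Every deployment inside the same cell maps to the same output, so the
    exact camera location cannot be recovered from the published data.
    The 0.1-degree cell size (roughly 11 km of latitude) is an assumption.
    """
    cell_lat = (lat // grid_degrees) * grid_degrees + grid_degrees / 2
    cell_lng = (lng // grid_degrees) * grid_degrees + grid_degrees / 2
    return round(cell_lat, 6), round(cell_lng, 6)

# A hypothetical deployment in Colombia maps to its cell center:
public_lat, public_lng = obfuscate_location(4.5678, -74.1234)
```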
Wildlife Insights promotes sharing information for the benefit of biodiversity conservation. We recognize, however, that data providers also want to publish. Wildlife Insights will provide the option to embargo data for a limited period of 24 months before the data is made public. Data providers may request additional extensions by contacting Wildlife Insights at [email protected].
Creative Commons provides standardized licenses that make it easier for people to choose how their work is shared. For each project, data providers may choose to license data under Creative Commons licenses:
- Images (recorded data) may be licensed under CC0, CC BY or CC BY-NC.
- Metadata may be licensed under either CC0 or CC BY.
These licenses are described below:
- Creative Commons Zero (CC0), which permits a user to share, adapt and modify the work, even for commercial purposes, without asking permission (summary, full legal text)
- Creative Commons Attribution 4.0 (CC BY 4.0), which permits a data user to share and adapt material with appropriate attribution, including for commercial purposes (summary, full legal text)
- Creative Commons Attribution-NonCommercial 4.0 (CC BY-NC 4.0), which permits a data user to share and adapt material with appropriate attribution, only for noncommercial purposes (summary, full legal text)
By agreeing to the Terms of Use, you grant Wildlife Insights the right to use your data* for certain purposes including:
- The aggregation of wildlife data (i.e., summaries of data at a biome, national, global, or species level);
- The promotion of standardized protocols and best practices;
- The management of data and the provision of indicators;
- The generation of insights and visualizations for conservation action;
- Posting on Wildlife Insights social media accounts**;
- Creating publicity materials for Wildlife Insights**;
- Developing or improving computer vision models only for the purpose of advancing technology related to conservation;
- Developing derived products** for use in online forums, presentations, and peer-reviewed publications***.
*Including data on sensitive species, provided the use does not expose geographic location data.
**All of the mentioned uses will be with attribution to you.
***Derived products may be produced by Wildlife Insights or a Wildlife Insights core partner. Wildlife Insights will not publish derived products that include your embargoed data in peer-reviewed publications without your consent.
Derived products are aggregations of data, summary statistics and information products including charts, maps or graphs. Wildlife Insights may produce derived products to provide the public with timely information that captures large-scale biodiversity trends. In order for these metrics to be relevant and effective, the inclusion of recent or even near-real time information is key. Wildlife Insights endeavors to support this need, while respecting the data privacy terms of your dataset and ensuring data attribution.
If your data is used, WI will provide attribution as required by the Creative Commons license assigned to the data. Attribution may include using your organization name and logo.
Wildlife Insights permits data providers to embargo data for up to 24 months. Two extensions of up to 12 months each may be requested by sending an email to [email protected]. Extension requests will be reviewed and approved by Wildlife Insights on a case by case basis. The embargo period is applied to an entire project, but is measured separately for each deployment (i.e., a unique placement of a camera in space and time). Embargoed data will not be available to users outside of your project. However, the metadata of any embargoed project will still be available in the public database. Note that by signing the Terms of Use, you provide Wildlife Insights and Wildlife Insights core partners permission to use your data, including embargoed data, to develop derived products. These derived products may be displayed on the Wildlife Insights website or used in presentations, for example, but will not be used in peer-reviewed publications without your consent.
Wildlife Insights runs on the Google Cloud Platform, which implements rigorous security practices to protect against unauthorized access. Click on the following links to learn more about Google Cloud security:
- https://cloud.google.com/security/infrastructure/
- https://cloud.google.com/security/overview/
- https://cloud.google.com/security/overview/whitepaper
In addition to the Wildlife Insights security measures provided by the Google Cloud Platform, the Wildlife Insights application provides additional security and protection via HTTPS. HTTPS (Hypertext Transfer Protocol Secure) is an internet communication protocol that protects the integrity and confidentiality of data between the user's computer and the site. Data sent using HTTPS is secured via Transport Layer Security protocol (TLS), which provides three key layers of protection:
- Encryption—encrypting the exchanged data to keep it secure from eavesdroppers. That means that while the user is browsing a website, nobody can "listen" to their conversations, track their activities across multiple pages, or steal their information.
- Data integrity—data cannot be modified or corrupted during transfer, intentionally or otherwise, without being detected.
- Authentication—proves that your users communicate with the intended website. It protects against man-in-the-middle attacks and builds user trust, which translates into other business benefits.
You may remove unintended uploads if the removal is completed within 48 hours after data is uploaded to Wildlife Insights. After this brief period, you may only remove data from Wildlife Insights by sending a request to [email protected]. Wildlife Insights administrators will review requests and grant approvals on a case by case basis.
If your account is deleted, your data will remain in the Wildlife Insights database. Your public data will remain accessible to other users and your embargoed data will remain embargoed through the end of the embargo period. If you are an organization administrator and delete your account, you will be prompted to assign another user to the administrator role.
We will provide you with ninety days’ notice of our intention to terminate the Service. Third-party sub-licensees working on improving computer vision models may retain indefinite access to your data, but only for the purpose of advancing technology related to conservation and for no other reason.
Any data published on Wildlife Insights may be used by Wildlife Insights to develop aggregated data products, including global analyses. Wildlife Insights may use these analyses to produce annual reports on the state of wildlife.
Wildlife Insights is the largest and most diverse collection of camera trap images that is open to the public. Wildlife Insights also provides unique tools, including artificial intelligence models for species identification, automated statistics and a cloud-based platform to easily share data. Wildlife Insights is the only system that provides all of these features in one place, making it easy for decision-makers to access the information they need to protect wildlife.
Most of the Wildlife Insights partners have been using camera traps to collect information on wildlife populations for years. Wildlife Insights grew out of a joint recognition that the data being collected individually could provide much more value to the conservation community if brought together, standardized and made openly available. The data in Wildlife Insights at the time of the first release is contributed by Wildlife Conservation Society, WWF, Conservation International and the Tropical Ecology and Monitoring (TEAM) Network. As users upload data into Wildlife Insights, the number of contributors will grow, the database will grow in size and representativeness, and the species identification AI will continue to improve.
During our first release, members of our Trusted Tester program can use all of the features in Wildlife Insights to create projects and initiatives, upload images, identify species using artificial intelligence, view basic analytics and download their own data. If you have camera trap data to share and would like to apply to become a Trusted Tester, please provide us with more details about your project(s). Note that your account and password will not be recognized until your account is approved by a Wildlife Insights administrator.
Anyone can visit the Wildlife Insights Explore page to browse projects and discover select camera trap images.
In the future, the public will also be able to download data from the Wildlife Insights Explore page. Data accessible for public download on the Explore page is only made available after certain restrictions are implemented to protect sensitive species and other privacy concerns. These measures include:
- obfuscating (blurring) the exact location of any deployment in an effort to limit access to sensitive species;
- removing all images of humans from public pages and downloads;
- and limiting access to embargoed data (i.e., images and deployment information). Public users may view details associated with an embargoed project such as the project name, objectives and organization name but will not be able to download data from the embargoed project.
Learn more about how Instituto Humboldt in Colombia has monitored its incredibly diverse wildlife in a changing political landscape with camera traps in this video.
Wildlife Insights encourages users to share their data publicly but also recognizes that data providers may want to publish their data first. Wildlife Insights will provide the option to embargo data for 24 months before the data is made public. Data providers may request an additional 24-month embargo (for a total of 48 months) by contacting Wildlife Insights at [email protected]. Embargoed data will not be available to the public for the duration of the embargo, but project metadata (e.g., project name, objectives) may be shared with the public.
Other users may need to keep data private in order to comply with legal requirements. If you are restricted from sharing data publicly, please contact [email protected] with details of your project and sharing requirements.
Note that by signing the Terms of Use, you grant Wildlife Insights and Wildlife Insights core partners permission to use your data, including embargoed data, to develop derived products. These derived products may be displayed on the Wildlife Insights website or used in presentations, for example, but will not be used in peer-reviewed publications without your consent.
Wildlife Insights is committed to making the platform available to anyone who is working to advance wildlife conservation. The platform will be free of charge initially, and Wildlife Insights is exploring tiered services, from Basic (free) to Premium (subscription-based).
Membership in Wildlife Insights is open to anyone involved in recording vertebrate diversity through camera trap images. Prospective members are expected to have interests in:
- Conserving and understanding the ecology and distribution of vertebrate species
- Interacting with other Members; and
- Contributing to the attainment of WI's goals.
Wildlife Insights has two membership categories:
- Core Members: Institutions who are actively engaged in carrying out the activities to develop Wildlife Insights.
- Associate Members: Individuals or institutions who support the goals of Wildlife Insights by participating in a WI Working Group and/or by providing expertise to Wildlife Insights.
You do not need to join Wildlife Insights in order to share in the benefits of the website. If you wish to provide camera trap data to the site, you will need to sign a Data Provider Agreement. If you wish to use camera trap data from the site for non-commercial purposes, you will need to sign a Data User Agreement.
Wildlife Insights has adopted a Dynamic Governance Model that promotes inclusive decision-making and contributions to the WI Core Purpose and Mission. Governance is distributed to every level of membership, with a Steering Committee serving as the highest governing body of WI. The Steering Committee includes one voting representative from each of the Core Member Institutions and two non-voting representatives from each of the Standing Committees. The Standing Committees provide guidance and recommendations to the Steering Committee, which reviews and approves work plans in pursuit of WI goals.
The four Standing Committees (Technology, Science and Analytics, Outreach, and Sustainability) ensure that Wildlife Insights strives to meet the needs of WI users and stakeholders by providing programmatic guidance on the development and implementation of the partnership and platform. Individuals from both Core and Associate Member Institutions are invited to serve on the Standing Committees, which are described below:
The Technology Committee leads and oversees the development of technology systems and tools that meet the needs of target stakeholders and groups.
The Science and Analytics Committee ensures that the WI platform supports the use and development of new cutting-edge statistics and recommends analytical and visualization approaches for addressing them.
The Outreach Committee advises on topics related to the recruitment, engagement and communications strategies of Wildlife Insights.
The Sustainability Committee provides guidance on the best approaches and strategies for maintaining long-term financial and operational stability.
AI models, developed by Google, have been trained on 11.6M images to automatically filter out blank images and identify 732 animal species in a fraction of a second. An expert can process anywhere from 300-1,000 camera trap images per hour. If that task is sent to hundreds or thousands of machines in parallel using Google Cloud Platform, processing camera trap images to find those containing animals becomes thousands of times faster. This allows biologists to spend time on the animals of interest to them, instead of sifting through thousands of empty images looking for animals.
We are using a deep convolutional neural net for multi-class classification using Google’s open source TensorFlow framework to train an AI model to identify animal species in camera trap images.
Like humans, AI models generally get better at recognizing and identifying animals if they can look at hundreds or thousands of diverse images of that particular species. If you have camera trap data with many images of popular species or even a few images of rare species, we encourage you to contact us to get trusted tester access to Wildlife Insights, so that you can more easily manage and identify your camera trap images and contribute to the accuracy of Wildlife Insights AI models.
The AI model is trained on images from Conservation International's Tropical Ecology and Monitoring (TEAM) Network, Snapshot Serengeti, Caltech Camera Traps, North American Camera Trap Images, WWF and One Tam, which include 837 classes with 732 species from around the world.
We are adding to this core training dataset frequently on an ongoing basis with data from Wildlife Insights core members: Conservation International, Smithsonian Institution, North Carolina Museum of Natural Sciences, Wildlife Conservation Society, ZSL, and WWF. We have also trained using openly available datasets on lila.science.
Our training data contains images labeled with WI taxonomy including Class > Order > Family > Genus > Species. Species, which is the most granular level, is used as a class label to train a multi-class image classifier.
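A labeled training record of this shape can be sketched as follows (the image path and record layout are illustrative, not the Wildlife Insights schema):

```python
# Illustrative only: one labeled record carrying the full WI-style
# taxonomic path, Class > Order > Family > Genus > Species.
record = {
    "image": "deployment_042/IMG_0001.JPG",  # hypothetical file path
    "taxonomy": {
        "class": "Mammalia",
        "order": "Artiodactyla",
        "family": "Cervidae",
        "genus": "Odocoileus",
        "species": "Odocoileus hemionus",    # mule deer
    },
}

# The class label used to train the multi-class classifier is the
# most granular level: the species.
train_label = record["taxonomy"]["species"]
```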
Perhaps most importantly, AI can help identify images without animals where things like blowing grass can trigger a camera, which can be up to 80% of a dataset. Leveraging Google AI Platform Predictions, this functionality alone can dramatically reduce the amount of time spent processing and identifying camera trap data.
The first task for the AI models has been to identify images not containing any animals, since no one wants to look at thousands of empty images. The AI models in Wildlife Insights catch 47% of blank images with an error rate of less than 2.5%. This allows Wildlife Insights users to minimize human involvement in sifting through millions of images looking for wildlife, letting tens to hundreds of machines do that work for them in a fraction of the time. Upload your images, put computer vision to work for you, and when the results come back you can focus on the images that need your attention.
Across the 837 classes and 732 species that the models have been trained on, classes like blue duiker, African elephant, southern pig-tailed macaque, or the suni (a small antelope) have between an 80% and 98.6% probability of being correctly predicted by the AI models.
Categorizing species in camera trap data can be very challenging, even for humans. Data quality can play a huge part in our ability to correctly classify an image, and even human experts struggle with images that are poorly illuminated, blurry, or where the animal is very small, hidden behind vegetation, or far away. There are also many sets of species that are easily confused, like bobcats and lynx. Images that have low data quality or contain easily-confused species are harder for both humans and AI. That said, AI improves when given many, diverse examples of a given species.
This is where you can help! You can correct any mis-labeled species using the Wildlife Insights interface and improve the model accuracy for that species class. In addition, by adding your data to Wildlife Insights, you can help provide sufficient examples of each species to our AI systems so that the species can be accurately identified in the future. By uploading data from your unique camera traps you are not only improving species accuracy. Each individual camera trap has a set of biases, such as the background, perspective on the animals, and lighting conditions. Your data is also increasing the camera diversity of our dataset, and in turn improving the robustness of our AI models to varied camera conditions.
Once you’ve uploaded your camera trap images to Wildlife Insights, you will see the per-image classification confidence displayed alongside the image in the Identify section of Wildlife Insights.
You can also look up individual species to see if that species has examples in our current training dataset, and what the model performance is on that class.
Search for a species of your interest. Let’s take the example of mule deer. If there were a total of 100 images of mule deer in the data you uploaded, we would be able to identify 81 of those as mule deer. This is the recall for the class. If we predict 100 images as mule deer, about 93 of them are likely to actually be mule deer, and we may misclassify 7 of them as something else. That is the precision for the class.
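The recall and precision figures in the mule deer example follow from simple ratios; a minimal sketch:

```python
def recall(true_positives, actual_positives):
    """Of all true mule deer images, what fraction did we label mule deer?"""
    return true_positives / actual_positives

def precision(true_positives, predicted_positives):
    """Of all images we labeled mule deer, what fraction really are mule deer?"""
    return true_positives / predicted_positives

# The example from the text: of 100 true mule deer images, 81 are found
# (recall); of 100 images predicted as mule deer, 93 are correct (precision).
print(recall(81, 100))     # 0.81
print(precision(93, 100))  # 0.93
```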
If you see “Needs More Data” in the metrics, this means our AI model is not able to predict a single class with confidence above a fixed threshold (the threshold is tuned manually). This may happen due to a low number of images of this species in training, low diversity in the images (e.g., all from the same camera location, similar backgrounds, etc.), or because the characteristics that identify the species are shared among many species, which confuses the model. You can help improve our model accuracy by contributing additional data to Wildlife Insights for your species of interest. As users upload more images from different regions and of more diverse species, our AI models will get better at recognizing more species.
If you do not see your species of interest listed it means that we currently have no examples of that species in our dataset. This is all the more reason to contribute, so we can continue to grow the number of supported species in Wildlife Insights.
Learn more about assessing classification accuracy for AI models in general.
Convolutional neural networks (CNNs) are a widely successful AI paradigm for computer vision. At a high level, the model takes an image as a 2D input (an array of pixels in a single channel or RGB channels) and runs mathematical operations in a series of steps. Each step is referred to as a layer. There are specialized layer types used in CNNs for images, such as convolution and pooling. Multiple such layers are stacked together to form a deep convolutional neural network.
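The convolution and pooling operations mentioned above can be illustrated with a toy, pure-Python example (single channel, one hand-written kernel; real CNN layers apply many learned kernels in parallel):

```python
def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)] for i in range(out_h)]

def max_pool2x2(fmap):
    """2x2 max pooling with stride 2, halving each spatial dimension."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

# A 4x4 single-channel "image" with a vertical edge, and a kernel that
# responds to that edge.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]
feature_map = conv2d(image, kernel)  # [[3, 3], [3, 3]]: strong edge response
pooled = max_pool2x2(feature_map)    # [[3]]
```

Stacking many learned kernels like this one, interleaved with pooling, is what lets deeper layers recognize patterns such as stripes and spots.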
Some models have been trained using large amounts of generic image data and can be re-purposed by tuning them to a specific problem (like species identification in camera trap images). We start with one such model, called Inception-V4, and fine-tune it for species classification in camera trap images using labeled data from Wildlife Insights.
Why fine-tune from a pre-trained model?
Fine-tuning is done to adapt the model to characteristics of camera trap images e.g. blurring, low lighting, etc. There are many common characteristics learned by the pre-trained model, like detecting edges of objects or identifying patterns like stripes and spots. These generic visual features are useful for identifying species in camera trap images, and by fine-tuning from a model that already has these capabilities, we are able to quickly leverage those features for our species classification task.
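A fine-tuning setup of this kind can be sketched with tf.keras. Everything here is a structural illustration: the tiny stand-in backbone, input size and layer sizes are placeholders (Wildlife Insights fine-tunes Inception-V4), and only the 732-species class count comes from the text.

```python
import tensorflow as tf

NUM_SPECIES = 732  # species-level classes mentioned in the text

# Stand-in for a pre-trained backbone. In practice this would be a large
# pre-trained model (WI uses Inception-V4); a tiny conv stack keeps the
# sketch self-contained and fast to build.
base = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
], name="pretrained_base")

# Freeze the generic visual features (edge, stripe and spot detectors)
# learned during pre-training; only the new head is trained at first.
base.trainable = False

# New classification head, trained on labeled camera trap images.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_SPECIES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

After the head converges, some or all backbone layers can be unfrozen (`base.trainable = True`) and trained at a low learning rate to adapt the generic features to camera trap conditions like blur and low light.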
How is the model evaluated?
We believe it is important to evaluate our models in a manner similar to how they will be used. We want to ensure that models will work well for new users uploading data from camera locations unseen during training. In order to evaluate how well the model does on unseen data, we hold out the images from some of the camera locations in our dataset to serve as an unseen “test set.” We bin all of our dataset camera locations into 10x10 meter lat/long grid cells, and then select a random set of these grid cells to serve as our test set. This ensures that we do not train and evaluate on similar images (e.g., with the same background), which may lead to incorrectly high accuracy numbers.
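A location-based hold-out of this kind can be sketched as follows. The grid-binning helper, cell size and coordinates are all illustrative, not Wildlife Insights' actual implementation:

```python
import random

def grid_cell(lat, lon, cell_size=0.001):
    """Bin a camera location into a grid cell so nearby cameras share a cell."""
    return (int(lat / cell_size), int(lon / cell_size))

def split_by_location(camera_locations, test_fraction=0.2, seed=42):
    """Hold out whole grid cells, never individual images, for the test set."""
    cells = sorted({grid_cell(lat, lon) for lat, lon in camera_locations})
    rng = random.Random(seed)
    rng.shuffle(cells)
    n_test = max(1, int(len(cells) * test_fraction))
    test_cells = set(cells[:n_test])
    train = [loc for loc in camera_locations if grid_cell(*loc) not in test_cells]
    test = [loc for loc in camera_locations if grid_cell(*loc) in test_cells]
    return train, test

# Two cameras in the same cell always land on the same side of the split,
# so the test set never shares a background with the training set.
locations = [(4.1200, -73.5300), (4.1201, -73.5301), (10.5, 7.25), (48.85, 2.35)]
train_set, test_set = split_by_location(locations)
```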
When a new image is uploaded, we do a forward pass over the trained network (i.e., run through all the layers one by one) and extract a probability distribution over all species classes. We then select the class with the highest probability as the predicted class. We consider the probability of the highest class to represent the “confidence” of the model in its prediction. In some cases, the model does not predict any of the classes with a high probability. When this occurs, we return “No CV Result,” short for “No Computer Vision Result,” instead of returning a low-confidence species prediction. As the training dataset grows, our model will become more confident and return fewer “No CV Result” predictions.
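The prediction step described here amounts to a softmax over class scores followed by a confidence threshold; a minimal sketch (the class names and the 0.65 threshold are illustrative, not the values Wildlife Insights uses):

```python
import math

def softmax(logits):
    """Convert raw network outputs into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits, class_names, threshold=0.65):
    """Return the top class, or 'No CV Result' when confidence is too low."""
    probs = softmax(logits)
    confidence = max(probs)
    top_class = class_names[probs.index(confidence)]
    if confidence < threshold:
        return "No CV Result", confidence
    return top_class, confidence

classes = ["blank", "mule_deer", "bobcat"]
print(predict([0.1, 4.0, 0.3], classes))  # confident: mule_deer
print(predict([1.0, 1.2, 1.1], classes))  # close call: No CV Result
```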
During training, our AI models learn to recognize the unique characteristics of different species, such as patterns, textures, colors, etc. Using integrated gradients, we can visualize which parts of the image are most important to the model when making a species prediction. See examples of images and the associated integrated gradients visualizations below.


Wildlife Insights is hosted on the Google Cloud Platform, and inferencing is done using Google AI Platform Predictions. Once the images are uploaded (upload speed depends on your network bandwidth), we are capable of handling hundreds of queries per second (QPS) for online prediction, parallelized across hundreds to thousands of machines. On a single GPU, we can process about 18,000 images per hour, which can be scaled further by running across hundreds of GPUs. For reference, a human expert can label 600 images per hour. The purpose of AI models is to assist human experts by freeing them from flipping through most of the images, leaving only a fraction for their expert opinion.
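The speedup implied by these rates follows directly (the GPU count below is an illustrative choice, not a statement about the actual deployment):

```python
HUMAN_IMAGES_PER_HOUR = 600   # expert labeling rate from the text
GPU_IMAGES_PER_HOUR = 18_000  # single-GPU inference rate from the text

# One GPU already works 30x faster than a human expert...
single_gpu_speedup = GPU_IMAGES_PER_HOUR / HUMAN_IMAGES_PER_HOUR
print(single_gpu_speedup)  # 30.0

# ...and the work parallelizes: with, say, 100 GPUs (illustrative),
# a million images take well under an hour.
gpus = 100
hours_for_million = 1_000_000 / (GPU_IMAGES_PER_HOUR * gpus)
print(round(hours_for_million, 2))  # 0.56
```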
If you have images that you have already labelled, that you would like to contribute to improving our AI models, or images that you would like to upload to Wildlife Insights, contact us to work on ingesting your data.
When you upload your camera trap images to Wildlife Insights, our AI models run image classification in the background, and you can see the results in the “Identify” tab. Alongside the image we display the predicted identification (common name, plus genus and species) as well as the model's confidence.
We return the species label for an image only if we are relatively sure about our prediction. If the AI is not confident, we return “No CV Result,” short for “No Computer Vision Result.” This confidence is based on the score the model returns for the top predicted class, on a scale of 0-100%.
The chart below indicates how the model performs for specific species. For example, if you upload an image that contains a red deer, then 95.30% of the time we will correctly identify the class, but 3.42% of the time we may not be confident about our prediction, so we return no result. This may happen for multiple reasons: for example, we may not have had enough diverse images (different backgrounds, different profiles of the animal, different lighting conditions, etc.) for this species during training, or multiple similar-looking species may confuse our model.
Our AI models are still learning! If you see an error in classification, you can click “Edit Identification” and correct the species label. This feedback helps our model improve for that species class.
Users of Wildlife Insights have the capability to edit the suggested result from the Wildlife Insights AI models. When we see that enough new images have user-generated or edited classifications, we will retrain our models with this new data. If you have camera trap images, you can directly contribute to the improvement of Wildlife Insights’ AI models, to help accurately identify the animals you care about. Please contact us to become a Wildlife Insights trusted tester.
For mammals, Wildlife Insights uses the IUCN Red List of Threatened Species as the primary taxonomy. For birds, we use BirdLife International’s taxonomy. We also have several classes for non-animals, such as car, equestrian, domestic dog, etc.
Users are also able to add custom notes to image metadata for local or indigenous names of species.
Wildlife Insights and Google are focusing on developing a model that can accurately identify species but not individual animals. There are other groups that are successfully training computer vision algorithms to identify individual animals, and we hope to work together with those groups in the future to continue to expand the scope of Wildlife Insights.
The infrastructure for Wildlife Insights can support video and other types of sensor data, including acoustic data, but initially only provides support for camera trap images. The long term plan for Wildlife Insights is to support multiple sensor data types.
For our first release, we really want to hear from you if you already have camera trap data, whether you’ve already catalogued it and labelled species in images or not. Wildlife Insights is open to anyone involved in camera trapping to advance wildlife conservation. Camera trap data providers may sign up for an account to share data and anyone can browse the global database.
Please contact us if you’d like to get added to the trusted tester group, in order to share your data and run it through the Wildlife Insights AI models.
Users are able to upload images containing humans to Wildlife Insights, and they will be classified by AI models in the uploading process. However, images of humans will not be made public, nor are they downloadable. Some wildlife researchers are interested in studying human-wildlife interactions. In order to facilitate this, metadata for images containing humans (that does not contain any personally identifiable information) will be available to the public for download (please refer to our Terms of Use for more information).
Yes! Depending on what group you most align with.
- If you are a wildlife expert, you can contribute by helping us refine our species metadata to understand the behavioral patterns of certain species. You can also help improve our models by giving feedback on our identifications if you find mistakes.
- If you are a computer scientist, we would be delighted to collaborate on various research angles in computer vision and AI that are relevant in this domain. If you’re interested in contributing or being part of our discussions, you can email [email protected] and request to be invited to the AI for Conservation Slack channel (https://aiforconservation.slack.com), where we have a #wildlifeinsights discussion thread. We have listed a few interesting research directions we have been exploring below.
- If you are a nature enthusiast, please explore the Discover page to see camera trap data from all around the world and learn about the amazing world of wildlife and biodiversity.
The field of species classification on sensor-based data is really just beginning. There are a number of approaches we will explore to continue to improve the models... and, if we haven’t mentioned it yet, more data always helps!
Here are some techniques and experiments we may run to improve our results:
- Leveraging the hierarchical structure of the taxonomy
- Including spatio-temporal information in training and/or when predicting
- Combining different types of AI models (e.g. bounding box detectors) to enable the classifier to focus on areas of interest within the image
- Leveraging sequential information inherent in camera trap images that appear in bursts
Wildlife Insights will not knowingly display or enable the download of images of humans in the public database. However, a record of the image (i.e., the date, time, identification) will be available for download*. Within a user’s private workspace, images of humans may be stored, hidden or deleted by the user.
*Public downloads are not yet available in Wildlife Insights
Public downloads of Wildlife Insights data will be available in an upcoming release of the platform. Data available to the public will never include the exact location of sensitive species, images of humans or embargoed data.
Anyone who downloads data from Wildlife Insights must agree to the Terms of Use and provide their contact information and intended use of the data. The Terms of Use allow a user to share data and images in accordance with certain Creative Commons licenses.
While Wildlife Insights is committed to open data sharing, we recognize that revealing the location for certain species may increase their risk of threat. To protect the location of sensitive species, Wildlife Insights will obfuscate, or blur, the location information of all deployments made available for public download* so that the exact location of a deployment containing sensitive species cannot be determined from the data. Practices to obfuscate the location information associated with sensitive species may be updated from time to time with feedback from the community.
*Public downloads are not yet available in Wildlife Insights
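One standard way to obfuscate coordinates is to truncate them to a coarse grid. The sketch below illustrates the general technique only; it is not the actual procedure Wildlife Insights applies:

```python
def obfuscate(lat, lon, precision=1):
    """Round coordinates to `precision` decimal places. One decimal degree
    of latitude is roughly 11 km, so precision=1 hides a deployment's exact
    position while keeping it useful for coarse, regional analyses."""
    return round(lat, precision), round(lon, precision)

# A hypothetical deployment location before and after obfuscation.
exact = (4.123456, -73.534321)
public = obfuscate(*exact)
print(public)  # (4.1, -73.5)
```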
Wildlife Insights follows Creative Commons standards for licensing, which provide guidelines for how data should be shared and distributed.
For each project you create you will be prompted to assign Creative Commons licenses separately to the metadata and images. When your data is downloaded, you will receive a notification of the download with the user’s contact information and objectives for using the data. The user will also be provided with the Creative Commons license assigned to your data. It is up to the data user and provider to ensure the data is appropriately attributed.
*In future releases, each project in Wildlife Insights will also be associated with a permanent and unique identifier (i.e., a DOI) that can be referenced via a url. The identifier makes it easy for others to cite your work and acknowledge your contributions.
If your data is used, WI will provide attribution as required by the Creative Commons license assigned to the data. Attribution may include using your organization name and logo.
Creative Commons provides standardized licenses that make it easier for people to choose how their work is shared. For each project, data providers may choose to license data under Creative Commons licenses:
- Images (recorded data) may be licensed under CC0, CC BY or CC BY-NC.
- Metadata may be licensed under either CC0 or CC BY.
These licenses are described below:
- Creative Commons Zero (CC0) permits a user to share, adapt and modify the work, even for commercial purposes, without asking permission (summary, full legal text)
- Creative Commons Attribution 4.0 (CC BY 4.0), which permits a data user to share and adapt material with appropriate attribution, including for commercial purposes (summary, full legal text)
- Creative Commons Attribution-NonCommercial 4.0 (CC BY-NC 4.0), which permits a data user to share and adapt material with appropriate attribution, only for noncommercial purposes (summary, full legal text)
By agreeing to the Terms of Use, you grant Wildlife Insights the right to use your data* for certain purposes including:
- The aggregation of wildlife data (i.e., summaries of data at a biome, national, global, or species level);
- The promotion of standardized protocols and best practices;
- The management of data and the provision of indicators;
- The generation of insights and visualizations for conservation action;
- Posting on Wildlife Insights social media accounts**;
- Creating publicity materials for Wildlife Insights**;
- Developing or improving computer vision models only for the purpose of advancing technology related to conservation;
- Developing derived products** for use in online forums, presentations, and peer-reviewed publications***.
*Including data on sensitive species, provided the use does not expose geographic location data.
**All of the mentioned uses will be with attribution to you.
***Derived products may be produced by Wildlife Insights or a Wildlife Insights core partner. Wildlife Insights will not publish derived products that include your embargoed data in peer-reviewed publications without your consent.
Wildlife Insights will not knowingly display or enable the download of images of humans in the public database. However, a record of the image (i.e., the date, time, identification) will be available for download*. Within a user’s private workspace, images of humans may be stored, hidden or deleted by the user.
*Public downloads are not yet available in Wildlife Insights
Derived products are aggregations of data, summary statistics and information products including charts, maps or graphs. Wildlife Insights may produce derived products to provide the public with timely information that captures large-scale biodiversity trends. In order for these metrics to be relevant and effective, the inclusion of recent or even near-real time information is key. Wildlife Insights endeavors to support this need, while respecting the data privacy terms of your dataset and ensuring data attribution.
Wildlife Insights permits data providers to embargo data for up to 24 months. Two extensions of up to 12 months each may be requested by sending an email to [email protected]. Extension requests will be reviewed and approved by Wildlife Insights on a case-by-case basis. The embargo period is applied to an entire project but is measured separately for each deployment (i.e., a unique placement of a camera in space and time). Embargoed data will not be available to users outside of your project; however, the metadata of any embargoed project will still be available in the public database. Note that by signing the Terms of Use, you grant Wildlife Insights and Wildlife Insights core partners permission to use your data, including embargoed data, to develop derived products. These derived products may be displayed on the Wildlife Insights website or used in presentations, for example, but will not be used in peer-reviewed publications without your consent.
You may remove unintended uploads if the removal is completed within 48 hours after the data is uploaded to Wildlife Insights. After this brief period, you may only remove data from Wildlife Insights by sending a request to [email protected]. Wildlife Insights administrators will review requests and grant approvals on a case-by-case basis.
If your account is deleted, your data will remain in the Wildlife Insights database. Your public data will remain accessible to other users and your embargoed data will remain embargoed through the end of the embargo period. If you are an organization administrator and delete your account, you will be prompted to assign another user to the administrator role.
We will provide you with ninety days’ notice of our intention to terminate the Service. After termination, third-party sub-licensees working on improving computer vision models may still have indefinite access to your data, but only for the purpose of advancing technology related to conservation and for no other reason.
Any data published on Wildlife Insights may be used by Wildlife Insights to develop aggregated data products, including global analyses. Wildlife Insights may use these analyses to produce annual reports on the state of wildlife.
Wildlife Insights will fuzz the exact coordinates of all deployments set at a location where a sensitive species is captured. The fuzzed coordinates will be provided in lieu of the exact coordinates in all public downloads. If you are downloading public data, you can determine which deployments have fuzzed coordinates by referring to the column titled Fuzzed in the deployments.csv file provided in your download package. If the value is True, the deployment's coordinates have been fuzzed; if the value is False, the coordinates are the exact coordinates originally provided to Wildlife Insights.
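As a sketch of how a data user might separate fuzzed from exact locations after downloading, the snippet below reads a deployments.csv file and splits its rows on the Fuzzed column described above. The exact column layout of the download package beyond the Fuzzed flag is an assumption for illustration:

```python
import csv

def split_by_fuzzed(path):
    """Split deployment rows into (fuzzed, exact) lists based on the
    Fuzzed column of a Wildlife Insights deployments.csv file."""
    fuzzed, exact = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # Fuzzed == "True" means the coordinates were obfuscated
            # to protect the location of sensitive species.
            if row["Fuzzed"] == "True":
                fuzzed.append(row)
            else:
                exact.append(row)
    return fuzzed, exact
```

A user analyzing fine-scale spatial patterns might keep only the exact rows, while biome- or country-level summaries can safely use both lists.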
How does Wildlife Insights define sensitive species?
The list of sensitive species is defined and managed by Wildlife Insights based on best practices and expert consultations. The Wildlife Insights sensitive species list includes:
- All terrestrial vertebrates (mammals, amphibians, reptiles, and birds) with the IUCN Red List categories CR (Critically Endangered), EN (Endangered), and VU (Vulnerable)
- Species of local concern that do not meet the above definition, if requested by the project owner (functionality is coming soon)
While Wildlife Insights is committed to open data sharing, we recognize that revealing the location for certain species may increase their risk of threat. To protect the location of sensitive species, Wildlife Insights will obfuscate, or blur, the location information of all deployments made available for public download* so that the exact location of a deployment containing sensitive species cannot be determined from the data. Practices to obfuscate the location information associated with sensitive species may be updated from time to time with feedback from the community.
*Public downloads are not yet available in Wildlife Insights
Wildlife Insights encourages users to share their data publicly but recognizes that data providers may also want to publish their data. Wildlife Insights will provide the option to embargo data for 24 months before the data is made public. Data providers may request an additional 24-month embargo (for a total of 48 months) by contacting Wildlife Insights at [email protected]. Embargoed data will not be available to the public for the duration of the embargo, but project metadata (e.g., project name, objectives) may be shared with the public.
Other users may need to keep data private in order to comply with legal requirements. If you are restricted from sharing data publicly, please contact [email protected] with details of your project and sharing requirements.
Note that by signing the Terms of Use, you grant Wildlife Insights and Wildlife Insights core partners permission to use your data, including embargoed data, to develop derived products. These derived products may be displayed on the Wildlife Insights website or used in presentations, for example, but will not be used in peer-reviewed publications without your consent.
Public downloads of Wildlife Insights data will be available in an upcoming release of the platform. Data available to the public will never include the exact location of sensitive species, images of humans or embargoed data.
Anyone who downloads data from Wildlife Insights must agree to the Terms of Use and provide their contact information and intended use of the data. The Terms of Use allow a user to share data and images in accordance with certain Creative Commons licenses.