Commit b3d6673

Fixing various broken links (#674)
1 parent 41b827c commit b3d6673

File tree

7 files changed: +26 additions, −16 deletions


contrib/action_recognition/README.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ Action recognition (also known as activity recognition) consists of classifying
 
 ![](./media/action_recognition2.gif "Example of action recognition")
 
-We implemented two state-of-the-art approaches: (i) [I3D](https://arxiv.org/pdf/1705.07750.pdf) and (ii) [R(2+1)D](https://arxiv.org/abs/1711.11248). This includes example notebooks, e.g. for scoring webcam footage or fine-tuning on the [HMDB-51](http://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database/) dataset. The latter can be accessed under [scenarios](../scenarios) at the root level.
+We implemented two state-of-the-art approaches: (i) [I3D](https://arxiv.org/pdf/1705.07750.pdf) and (ii) [R(2+1)D](https://arxiv.org/abs/1711.11248). This includes example notebooks, e.g. for scoring webcam footage or fine-tuning on the [HMDB-51](http://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database/) dataset. The latter can be accessed under [scenarios](../../scenarios) at the root level.
 
 We recommend using the **R(2+1)D** model for its competitive accuracy, fast inference speed, and fewer dependencies on other packages. For both approaches, using our implementations, we were able to reproduce reported accuracies:
 

contrib/crowd_counting/README.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ While there's a wide range of crowd counting models, two practical matters need
 - Speed. To support near real-time reporting, the model should run fast enough.
 - Crowd density. We need to allow for both high-density and low-density scenarios for the same camera. Most crowd counting models were trained on high-density datasets and tend not to work well in low-density scenarios. On the other hand, models like Faster-RCNN work well for low-density crowds but not so much for high-density scenarios.
 
-Based on an evaluation of multiple crowd counting implementations on our proprietary dataset, we narrowed the models down to two options: the Multi-Column CNN (MCNN) model from [this repo](https://github.com/svishwa/crowdcount-mcnn) and the OpenPose model from [this repo](https://github.com/ildoonet/tf-pose-estimation). Both models met our speed requirements.
+Based on an evaluation of multiple crowd counting implementations on our proprietary dataset, we narrowed the models down to two options: the Multi-Column CNN (MCNN) model from [this repo](https://github.com/svishwa/crowdcount-mcnn) and the OpenPose model from [this repo](https://github.com/jiajunhua/ildoonet-tf-pose-estimation). Both models met our speed requirements.
 - For high-density crowd images, the MCNN model delivered good results.
 - For low-density scenarios, OpenPose performed well.
 - When the crowd density is unknown beforehand, we use a heuristic approach: the MCNN prediction is used if the OpenPose prediction is above 20 and the MCNN prediction is above 50; otherwise, the OpenPose prediction is used. The thresholds can be adjusted for your scenario.
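The density heuristic described in that last bullet can be sketched in a few lines of Python. This is an illustrative sketch of the README's rule, not code from the repo; the function and parameter names are invented here:

```python
def combined_count(openpose_count: float, mcnn_count: float,
                   openpose_thresh: float = 20, mcnn_thresh: float = 50) -> float:
    """Heuristic from the README: trust MCNN only when both models report
    a sufficiently dense crowd; otherwise fall back to OpenPose."""
    if openpose_count > openpose_thresh and mcnn_count > mcnn_thresh:
        return mcnn_count   # high-density regime: MCNN is more reliable
    return openpose_count   # low-density regime: OpenPose is more reliable
```

The thresholds (20 and 50) are the defaults quoted in the README and can be tuned per camera.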

contrib/crowd_counting/crowdcounting/examples/tutorial.ipynb

Lines changed: 1 addition & 1 deletion
@@ -56,7 +56,7 @@
     "cell_type": "markdown",
     "metadata": {},
     "source": [
-     "Here we load the model and make predictions. By default, we used the CrowdCountModlePose() function which uses OpenPose model as implemented by [this GitHub repo](https://github.com/ildoonet/tf-pose-estimation). \n",
+     "Here we load the model and make predictions. By default, we used the CrowdCountModlePose() function which uses OpenPose model as implemented by [this GitHub repo](https://github.com/jiajunhua/ildoonet-tf-pose-estimation). \n",
     "\n",
     "Another option is the CrowdCountModelMCNN() function which uses the MCNN model as implemented [here](https://github.com/svishwa/crowdcount-mcnn).\n",
     "\n",

contrib/html_demo/JupyterCode/1_image_similarity_export.ipynb

Lines changed: 10 additions & 5 deletions
@@ -15,11 +15,11 @@
     "source": [
     "# Image Similarity Export\n",
     "\n",
-    "In the Scenario->Image Similarity notebook [12_fast_retrieval.ipynb](12_fast_retrieval.ipynb) we implemented the approximate nearest neighbor search method to find similar images from a group of reference images, given a query input image. This notebook repeats some of those steps with the goal of exporting computed reference image features to a text file for use in visualizing the results in an HTML web interface. \n",
+    "In the Scenario->Image Similarity notebook [12_fast_retrieval.ipynb](../../../scenarios/similarity/12_fast_retrieval.ipynb) we implemented the approximate nearest neighbor search method to find similar images from a group of reference images, given a query input image. This notebook repeats some of those steps with the goal of exporting computed reference image features to a text file for use in visualizing the results in an HTML web interface. \n",
     "\n",
     "To be able to test the model in a simple HTML interface, we export: the computed reference image features, a separate text file of reference image file names, and thumbnail versions of the reference images. The first two files are initially exported as text files, then compressed into zip files to minimize file size. The reference images are converted to 150x150 pixel thumbnails and stored in a flat directory. All exports are saved to the UICode folder. Notebook **2_upload_ui** is used to upload the exports to your Azure Blob storage account for easy public access. \n",
     "\n",
-    "It is assumed you already completed the steps in notebook [12_fast_retrieval.ipynb](12_fast_retrieval.ipynb) and have deployed your query image processing model to an Azure ML resource (container services, Kubernetes services, ML web app, etc.) with a queryable, CORS-compliant API endpoint."
+    "It is assumed you already completed the steps in notebook [12_fast_retrieval.ipynb](../../../scenarios/similarity/12_fast_retrieval.ipynb) and have deployed your query image processing model to an Azure ML resource (container services, Kubernetes services, ML web app, etc.) with a queryable, CORS-compliant API endpoint."
     ]
    },
    {
@@ -374,9 +374,9 @@
    ],
    "metadata": {
     "kernelspec": {
-     "display_name": "Python (cv)",
+     "display_name": "Python 3.6.9 64-bit",
      "language": "python",
-     "name": "cv"
+     "name": "python3"
     },
     "language_info": {
      "codemirror_mode": {
@@ -388,7 +388,12 @@
     "name": "python",
     "nbconvert_exporter": "python",
     "pygments_lexer": "ipython3",
-    "version": "3.6.8"
+    "version": "3.6.9"
+   },
+   "vscode": {
+    "interpreter": {
+     "hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6"
+    }
    }
   },
  "nbformat": 4,

contrib/html_demo/JupyterCode/readme.md

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ This directory contains a few helper notebooks that upload files and deploy mode
 ### Requirements
 
 To run the code in the [2_upload_ui.ipynb](2_upload_ui.ipynb) notebook, you must first:
-1. Install the [https://pypi.org/project/azure-storage-blob/](Azure Storage Blobs client library for Python)
+1. Install the [Azure Storage Blobs client library for Python](https://pypi.org/project/azure-storage-blob/)
 2. Have (or create) an Azure account with a Blob storage container where you would like to store the UI files
 3. Note your Blob storage credentials to upload files programmatically; you will need:
 a. Azure Account Name

scenarios/classification/FAQ.md

Lines changed: 2 additions & 2 deletions
@@ -91,12 +91,12 @@ The test set should contain images which resemble the population the model will
 
 ### How to speed up training?
 - All images can be stored on a local SSD device, since HDD or network access times can dominate the training time.
-- High-resolution images can slow down training due to JPEG decoding becoming the bottleneck (>10x performance penalty). See the [02_training_accuracy_vs_speed.ipynb](notebooks/02_training_accuracy_vs_speed.ipynb) notebook for more information.
+- High-resolution images can slow down training due to JPEG decoding becoming the bottleneck (>10x performance penalty). See the [03_training_accuracy_vs_speed.ipynb](03_training_accuracy_vs_speed.ipynb) notebook for more information.
 - Very high-resolution images (>4 megapixels) can be downsized before DNN training.
 

 
 ### How to improve accuracy or inference speed?
-See the [02_training_accuracy_vs_speed.ipynb](notebooks/02_training_accuracy_vs_speed.ipynb) notebook for a discussion of which parameters are important, and strategies for selecting a model optimized for faster inference speed.
+See the [03_training_accuracy_vs_speed.ipynb](03_training_accuracy_vs_speed.ipynb) notebook for a discussion of which parameters are important, and strategies for selecting a model optimized for faster inference speed.
 
 
 ### How to monitor GPU usage during training?

scenarios/similarity/12_fast_retrieval.ipynb

Lines changed: 10 additions & 5 deletions
Large diffs are not rendered by default.
