List of Questions

C
Crop Production Services
Workshop language: English
  • 1562

    When using historical imagery sets (such as NDVI time series), is there a best method for accounting for historical farming practices?


    When looking at the historical variability of farmland using an index such as NDVI, fields that frequently grow crops with poor ground cover, such as pulses, can skew the historical imagery set. Is there a different index to use for pulse crops, or different indices for each crop type, that would best capture the historical variability?
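
    For reference, the index in question is a simple band ratio; a minimal sketch (the reflectance values shown are illustrative only, not measurements from any particular sensor):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Values near 0 indicate bare soil or sparse cover (e.g. a thin pulse
    canopy); dense green vegetation pushes values toward 1.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Illustrative reflectance values: dense canopy vs. sparse pulse cover.
dense = ndvi(0.50, 0.05)
sparse = ndvi(0.25, 0.15)
```

    Because a sparse pulse canopy leaves so much soil exposed, its NDVI sits far below that of a dense cereal stand at the same date, which is exactly the bias the question describes.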

  • 1563

    With technological advances in agricultural equipment, are we headed towards current imagery sources such as Landsat becoming less useful because their resolution is coarser than that of other providers?


    Currently, many precision applications have been built using imagery such as Landsat's, with a 30 m resolution. New farm equipment with sectional control can begin to manage fields at a finer resolution than these satellites can provide.

  • 1564

    How much of an effect does resolution have on biomass indices for larger areas (field size ~160 acres), and at that scale is greater resolution truly needed in products for decision making?


    When looking at different imagery sources with varying resolution, what is the effect of the varying resolution on the biomass representation?
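
    To put resolution in perspective for a quarter-section field, a quick back-of-envelope calculation (the conversion factor is standard; the resolutions are illustrative of Landsat, Sentinel-2, and a commercial high-resolution source):

```python
ACRE_M2 = 4046.86          # square metres per acre
field_m2 = 160 * ACRE_M2   # a quarter section, ~647,500 m^2

# Approximate pixel counts per field at illustrative resolutions
# (30 m ~ Landsat, 10 m ~ Sentinel-2, 3 m ~ commercial smallsats).
pixels = {res: field_m2 / res**2 for res in (30, 10, 3)}
for res, n in pixels.items():
    print(f"{res:>2} m resolution -> ~{n:,.0f} pixels per field")
```

    Roughly 700 pixels at 30 m versus ~72,000 at 3 m over the same quarter section; whether the extra two orders of magnitude change the field-average biomass signal, rather than just its texture, is the crux of the question.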

  • 1565

    Is there a better index than NDVI specific to each western Canadian crop type we work with?


    Currently we use NDVI to evaluate many crop types throughout the Prairies; however, is there a better way to evaluate different crop types? What effect does the same biomass of different species have on the NDVI value?

  • 1566

    How do the canopies of different crops affect the light reflection underlying the NDVI index? Does canopy shape affect the wavelength and intensity of reflected light? Can this information be used to calibrate our model based on knowledge of crop rotations?


    Similar to the question posed around crop species having an effect on the NDVI value: what effect do plant architecture and structure have on the NDVI value?

  • 1567

    Can NDVI, and infrared light reflection from bare soil generally, provide insight into the mineral composition of the soil, specifically minerals affecting nutrient concentration and retention (i.e. apatites and hydroxides for P, kaolinite or montmorillonite clays for K, chelates for organic N)?


    Our current practice across the Prairies is to use random soil sampling to determine nutrient availability in the topsoil layers. Is there a method using hyperspectral sensing that may be able to estimate available nutrients?

  • 1568

    Is there a best practice for combining imagery sources of different resolutions?


    As we start to look into using different remote sensing and imagery types in agriculture, is there a best practice for combining multiple imagery sources, particularly of varying resolution, to properly represent variability in the field?
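
    One common first step when fusing sources is to resample everything onto the finest grid before combining; a minimal sketch using bilinear interpolation (the factor-of-3 ratio is illustrative, e.g. bringing a 30 m raster onto a 10 m grid; real pipelines also need georeferencing and radiometric harmonization, which are omitted here):

```python
import numpy as np
from scipy.ndimage import zoom

def to_common_grid(coarse, factor):
    """Bilinearly upsample a coarse raster by an integer factor so it can
    be stacked with (or averaged against) a finer-resolution raster."""
    return zoom(np.asarray(coarse, dtype=float), factor, order=1)

coarse = np.array([[0.2, 0.4],
                   [0.6, 0.8]])
fine = to_common_grid(coarse, 3)  # a 2x2 raster becomes 6x6
```

    Upsampling adds no real information, of course; it only puts the layers on a shared grid so that per-pixel combination is well defined.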

  • 1569

    Is there an index that could be used, or would be best suited, for determining agronomic crop attributes such as canola flowering percentage?


    Every year, agronomists subjectively estimate the flowering stage of canola crops to determine when they should be sprayed with a high-value fungicide application. Is there an index we could look into ground-truthing with agronomists in the field to empirically determine crop flowering stage?
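
    One candidate that may be worth ground-truthing is the Normalized Difference Yellowness Index (NDYI), which has been reported in the remote-sensing literature as tracking the yellow petal signal of flowering canola. This is a suggestion to validate in the field, not an established answer; a minimal sketch (band values illustrative):

```python
import numpy as np

def ndyi(green, blue, eps=1e-9):
    """Normalized Difference Yellowness Index: (G - B) / (G + B).
    Yellow canola petals reflect strongly in green relative to blue,
    so NDYI tends to rise with flowering fraction."""
    green = np.asarray(green, dtype=float)
    blue = np.asarray(blue, dtype=float)
    return (green - blue) / (green + blue + eps)

peak_bloom = ndyi(0.40, 0.10)   # strong yellow signal
pre_bloom = ndyi(0.20, 0.18)    # green canopy, weak yellow signal
```

    Pairing per-field NDYI time series with agronomists' staging calls would give exactly the empirical calibration the question asks about.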


M
Mosaic Potash
Workshop language: English
  • 1577

    Pipeline Imaging - How can we use imaging from drones to review the integrity of pipelines that are under water or covered by vegetation? How can we use software to automatically process drone thermal images of pipelines for hotspot detection?


    Pipeline Imaging

  • 1578

    Metallurgical/Chemical Analysis Imaging - Is it possible to classify potash ores visually in situ or on conveyor belts, e.g. determine the mineralogy, crystal size, grade/assay, iron insols, water content, magnesium? Is it possible to use imaging to determine % solids in brine slurries? In brine solutions, is there imaging which would be able to discern between KCl and NaCl? Is it possible to use imaging to determine final (dry) product grade? Is there any imaging technology that can measure moisture of moving material on a conveyor belt over its complete depth, not just the material on the surface?


    Metallurgical/Chemical Analysis Imaging

  • 1579

    Screen Inspection Imaging - We’re looking for an imaging solution to speed up and quantify inspection of our screens i.e. that can calculate open area and % blinding, average aperture size and highlight wear and screen rips for the following scenarios: (a) Sieve bends, (b) Vibratory screens and (c) Dry stacked screens.


    We’re aware of Metso’s work back in 2011 (attached). We’ve reached out to Metso and not had any response back. We believe this was not developed any further.


P
Petroleum Technology Research Centre
Workshop language: English
  • 1572

    What are the limits of scale on subsurface monitoring, and could centimetre-scale void spaces be resolved at 300 m depth?


    This question speaks to the issue of wormholes in heavy oil reservoirs, from which thousands of cubic metres of sand have been produced/removed.

  • 1573

    Can multiphase fluid flow through nano-porous media be resolved in real time?


    Many issues around producing oil and gas from very "tight" rocks centre on the nature of the fluid flow regime that dominates the mobilization process. Understanding these processes would help in optimizing production and enhanced recovery schemes.


Philips Canada
Workshop language: English
  • 1555

    How large does an annotated image set have to be to create powerful deep learning applications that can be used routinely across multiple laboratories, and how important is workflow integration in driving adoption by the pathology community?


    Digital Pathology and Philips: Digital pathology offers an opportunity to scan glass slides into whole slide images, enabling them to be viewed by pathologists anywhere in the world as well as correlated with digitally archived image data. Moreover, this digitization has made it possible to apply a variety of computational tools and algorithms to assist pathologists in (i) objectively analyzing tissue and (ii) streamlining and accelerating digital workflows. Digital pathology is therefore underpinning the next wave of advances, shifting pathology practice into an era of computational medicine and enhancing human expertise. The future holds great promise where digital pathology will be empowered by new computational tools driven by deep learning, making it possible to automate the analysis and annotation of whole slide images. Researchers at Philips are building a range of powerful algorithms to support pathologists in some of the most challenging areas of diagnostic practice. Using their unique scanning platform, they are building the world's largest data lake of highly annotated images together with comprehensive outcome data in cancer and other diseases. Philips applications will become an integral part of their portfolio as they continue to innovate and create new technologies to support diagnostics and discovery in pathology.

    Move to Computational Pathology and Deep Learning: As mentioned earlier, computational pathology is the next obvious transformation, and Philips has taken a lead in this direction. Philips is collaborating with PathAI on better breast cancer diagnosis with deep learning applied to histopathology images [1]. For deep learning based methods to be successfully used as decision support tools, they need to be resilient to data acquired from different sources, different staining and cutting protocols, and different scanners.

    In a recent study [2], a deep learning based method is evaluated for its accuracy and robustness in automatically identifying the extent of invasive breast tumor on digitized whole slide images. The method employs a Convolutional Neural Network (CNN) for detecting the presence of invasive tumor on whole slide images. One of the findings is that increasing the dataset size and diversity results in a better and more robust algorithm. The models failed for rare variants of invasive ductal carcinoma, such as mucinous invasive carcinoma, and the authors plan to include learning examples of this variant in future work. A recent work by Google [3] to detect cancer metastases uses a CNN on gigapixel pathology images to achieve high accuracy relative to human evaluation and other existing automated approaches. Their method involves training a CNN over small patches of 270 whole slide images from the Camelyon 2016 dataset and validating it on the remaining 130 images. They also test their approach on another 110 slides from a different source, and conclude that their future work will focus on improvements utilizing larger datasets.
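
    The patch-based training setup described above can be sketched independently of any particular framework; a minimal example of tiling a (downsampled) slide image into fixed-size patches for a CNN (the sizes are illustrative, not those used in [2] or [3]):

```python
import numpy as np

def extract_patches(image, patch=64, stride=64):
    """Tile a 2-D (or H x W x C) image into fixed-size patches.
    Real whole-slide pipelines also filter out background tiles and
    sample at multiple magnifications; this shows only the tiling."""
    h, w = image.shape[:2]
    patches = [
        image[i:i + patch, j:j + patch]
        for i in range(0, h - patch + 1, stride)
        for j in range(0, w - patch + 1, stride)
    ]
    return np.stack(patches)

# A placeholder 256 x 256 RGB tile stands in for a slide region.
slide = np.zeros((256, 256, 3), dtype=np.uint8)
tiles = extract_patches(slide)
```

    At gigapixel scale the tiling itself is trivial; the hard parts, as the studies note, are label quality, dataset diversity, and coverage of rare variants.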

  • 1576

    Will automatic delineation of tumors using multi-modal imaging and Machine Learning be able to provide delineation accuracy comparable to that of a trained medical practitioner and be accepted in clinical practice?


    Radiation Oncology and Philips: Fast and robust delineation of target volumes and organs at risk (OARs) is key for improving the efficiency of radiation treatment planning. Although commercial products for automatic delineation of OARs are available nowadays, fully automatic delineation of tumors remains challenging. Philips pioneered automated organ delineation by introducing a proprietary model-based segmentation technology [1, 2], which became an integral piece of its radiation therapy planning software.

    Machine Learning in Radiation Oncology: Machine Learning (ML) has been successfully applied in several areas of Radiation Oncology, for example, to improve the robustness of automatic segmentation algorithms [2] as well as for automatic positioning of radiation beams in certain types of radiation treatment [3]. Currently, very large collections of data are becoming available to researchers, and the application of powerful ML methods, which was hampered by the lack of appropriate data in the past, is now feasible. Feature extraction from different sources of anatomical and physiological information, such as CT, CBCT, MRI and PET images, in combination with ML is an attractive approach to the very challenging problem of automatic tumor delineation, which has always been the sole prerogative of a physician.


POS BioSciences
Workshop language: English
  • 1547

    Can you devise a method to determine the oil load and measure the level of oxidation in a microencapsulated powdered oil?


    We develop microencapsulated Omega-3 oils for industry and also have our own high-DHA (i.e. 40%) algal oil product. We want to confirm the oil load (i.e. concentration) in our microcapsules. The oxidative stability of the oil, as determined by peroxide value and para-anisidine value, is also of interest; however, we do not feel these analyses will be accurate if the oil needs to be extracted from the microcapsules prior to analysis. Our microcapsule wall materials are composed of protein and carbohydrate.

  • 1549

    Can you quantify the surface oil of a microencapsulated powdered oil?


    We develop microencapsulated Omega-3 oils for industry and also have our own high-DHA (i.e. 40%) algal oil product. Our microcapsules are composed of protein and carbohydrate.

  • 1557

    Predicting the proximate composition of biological materials using imaging technology


    A rapid tool to predict the proximate composition of biological materials would be useful: it would provide valuable information that can be used for making process changes and decisions quickly.
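
    One plausible route is NIR/hyperspectral imaging with a multivariate calibration against reference assays; the calibration step can be sketched with synthetic data (ordinary least squares stands in here for the PLS regression more commonly used with collinear spectra, and all numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for NIR spectra: 50 samples x 20 wavelength bands,
# with the reference assay (e.g. % protein) assumed linear in a few bands.
X = rng.random((50, 20))
true_coef = np.zeros(20)
true_coef[[3, 7, 12]] = [2.0, -1.0, 0.5]
y = X @ true_coef + 0.01 * rng.standard_normal(50)

# Fit the calibration model and predict composition from spectra alone.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
```

    In practice the hard work is collecting enough paired spectra and wet-chemistry reference values to make such a calibration transferable across materials and batches.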


Prairie Agricultural Machinery Institute
Workshop language: English
  • 1532

    What is the simplest and most effective imaging methodology for assessing level of crop residue degradation in the field?


    We will be initiating a three-year research project to evaluate the effect of various treatments on the degradation of flax residue. Managing flax residue usually requires baling and/or burning in the field; producers would like alternative solutions that improve degradation rates in the field, to minimize required management and allow them to recover valuable nutrients in the residue.


S
SED Systems
Workshop language: English
  • 1560

    Are there imaging methods that could define the success of a bond between composite fibers and their resin matrix in various composite materials?


    Understanding the method of bonding between the structural fiber and its resin matrix is of interest for both the initial properties of the composite and its long-term survivability.

  • 1561

    Are there imaging methods that could define the success (e.g. shear strength) of adhesively bonded composite parts?


    We have many composite parts to bond together in our products, and understanding the strength of the bonded joint as a function of the application process is very useful.


Shearer Agricultural Imaging & Remote Sensing Ltd.
Workshop language: English
  • 1570

    Where are we with respect to machine vision that recognizes weeds and/or insect pests in common agricultural crops in Manitoba, Saskatchewan, and Alberta?


    In 2015 I retired from forensic science, specifically fingerprint identification and AFIS searching. Over the last two years I have taken the principles of AFIS and forensic imaging and applied them to agronomy. I have come up with a boom-mounted camera system on a quad that travels fields at 30+ kph and takes thousands of high-resolution geotagged images per hour. Some of these images contain field weeds and/or insect pests; other images (most of them) are unremarkable. The next step in my project is to leverage machine vision on my growing database of local fields and weed pressures. I suspect that machine vision applied in this setting could, much like fingerprint searching, produce a short list of images that would be, for example, high probability of Canada thistle. I am not a computer programmer and need expertise in this next phase of development. Attached are a couple of sample images that were collected from a quad rolling at 30-35 kph. Weed identification, right down to flea beetle identification in canola, has been shown to be possible.

  • 1571

    Spectral or spatial resolution in agronomy: which is likely to prevail in crops of the Prairie Provinces?


    In agronomy I use near-infrared and RGB sensors. In forensics I used the same near-infrared sensors as well as UV and chemiluminescence. I always found the multispectral tools to be great for searching crime scenes and presumptive testing, but the number of false positives in the various wavelengths almost always precluded identification. In agronomy, even with hyperspectral sensors, I fear the number of false positives will make spectral analysis risky in the decision-making process. Going to higher spatial resolutions will simply let us see and identify the insect or weed instead of guessing what it is under the cloak of an NDVI. I think that, as in forensics, spectral resolution will be a searching tool and spatial resolution will be an identification tool. All thoughts welcome!


Siemens Healthcare Limited
Workshop language: English
  • 1499

    How do standard (approved) methods of measurement of iron content in liver patients compare with available MRI methods?


    N/A

  • 1500

    Do advanced MR elastography methods accurately quantify tissue properties in patients with elevated iron content? Is MR elastography or US elastography a more robust technique for the evaluation of liver fibrosis in patients?


    N/A


Sprockety Ventures Inc.
Workshop language: English
  • 1550

    How can you use machine learning algorithms to detect cancer in MRIs or other volumetric datasets?


    Sprockety has developed VR software called SLICESVR that allows medical practitioners, scientists, and researchers to view and manipulate images (e.g., MRI, CT, etc.) for purposes of analysis and diagnosis. We are always looking to add new features and functionalities to our software.

  • 1551

    How can volumetric images, generated at different times, be aligned to each other (i.e., one might be rotated slightly)?
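
    For the translational component of alignment, phase correlation is a standard starting point; a minimal sketch (handling the slight rotation mentioned above then requires an extension, e.g. searching a small set of candidate angles, or intensity-based rigid registration in a library such as SimpleITK):

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the integer voxel shift d such that np.roll(ref, d)
    best matches mov, via the Fourier shift theorem."""
    cross = np.conj(np.fft.fftn(ref)) * np.fft.fftn(mov)
    cross /= np.abs(cross) + 1e-12   # keep only the phase difference
    corr = np.fft.ifftn(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts past the midpoint wrap around to negative values.
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, corr.shape))

# Example: recover a known shift applied to a random test volume.
rng = np.random.default_rng(0)
vol = rng.random((16, 16, 16))
shifted = np.roll(vol, (3, -2, 5), axis=(0, 1, 2))
shift = phase_correlation_shift(vol, shifted)  # (3, -2, 5)
```

    The same correlation peak can be interpolated for sub-voxel precision, and the method is robust to global intensity differences between scan sessions because only the spectral phase is used.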


    Sprockety has developed VR software called SLICESVR that allows medical practitioners, scientists, and researchers to view and manipulate images (e.g., MRI, CT, etc.) for purposes of analysis and diagnosis. We are always looking to add new features and functionalities to our software.

  • 1552

    How can we segment organs or other structures from a volumetric image into a mesh?
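
    A simple pipeline is threshold-then-connected-components to get a voxel mask, followed by surface extraction (e.g. marching cubes, available as skimage.measure.marching_cubes) to produce the mesh. A sketch of the masking step with scipy (the threshold is illustrative; real organ segmentation typically needs model- or learning-based methods rather than a global threshold):

```python
import numpy as np
from scipy import ndimage

def largest_component_mask(volume, threshold):
    """Threshold a volume and keep only the largest connected component,
    a crude stand-in for an organ segmentation."""
    binary = np.asarray(volume) > threshold
    labels, n = ndimage.label(binary)
    if n == 0:
        return np.zeros_like(binary)
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)

# Toy volume with two bright blobs; only the larger one is kept.
vol = np.zeros((10, 10, 10))
vol[1:4, 1:4, 1:4] = 1.0
vol[7:9, 7:9, 7:9] = 1.0
mask = largest_component_mask(vol, 0.5)
```

    The boolean mask can then be handed to a marching-cubes routine to obtain vertices and faces suitable for rendering in SLICESVR.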


    Sprockety has developed VR software called SLICESVR that allows medical practitioners, scientists, and researchers to view and manipulate images (e.g., MRI, CT, etc.) for purposes of analysis and diagnosis. We are always looking to add new features and functionalities to our software.

  • 1553

    How can we find the optimal LUTs for viewing medical MRI or CT data?
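
    For CT in particular, an "optimal LUT" often reduces to a window/level mapping of Hounsfield units to display values; a minimal 8-bit sketch (the centre/width shown is a common soft-tissue preset, used here only as illustration):

```python
import numpy as np

def window_lut(hu, center, width):
    """Map Hounsfield units through a linear window/level LUT to 8-bit
    display values; values outside the window clip to 0 or 255."""
    lo, hi = center - width / 2.0, center + width / 2.0
    scaled = (np.asarray(hu, dtype=float) - lo) / (hi - lo)
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)

# e.g. a soft-tissue window: centre 40 HU, width 400 HU.
display = window_lut(np.array([-1000, 40, 3000]), center=40, width=400)
```

    Choosing the window automatically (e.g. from the histogram of the volume, or per anatomical preset) is then a search over just these two parameters.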


    Sprockety has developed VR software called SLICESVR that allows medical practitioners, scientists, and researchers to view and manipulate images (e.g., MRI, CT, etc.) for purposes of analysis and diagnosis. We are always looking to add new features and functionalities to our software.