rahmansunbeam

Contributors
  • Content Count

    773
  • Joined

  • Last visited

  • Days Won

    103

rahmansunbeam last won the day on January 21

rahmansunbeam had the most liked content!

Community Reputation

707 Expert

6 Followers

About rahmansunbeam

  • Rank
    Senior Lecturer

Contact Methods

  • Website URL
    clubgis.net

Profile Information

  • Gender
    Male
  • Interests
    Women....or what?

Recent Profile Visitors

2,318 profile views
  1. Interesting! @juliusmall, have you tried ASTER GDEM v3? The problem you are facing is probably elevation void areas, which are reduced in the new version released last year.
  2. @darksabersan interesting! Do share what you achieve later on. As for the RoboMaster, I haven't seen a programmable RC vehicle before. Definitely a better choice for smart kids. The price is OK, and I'd recommend it to parents thinking about an iPad or similar device for a birthday gift. Drones, on the other hand, should involve an adult when operated. BTW, even for adults, flying drones is not permitted in my country without special permission. 😏
  3. I like drones, but I just got more interested in this.
  4. The nature of the service depends on various other things: what type of feature you used, how you published it, and what the license and user level are. Check this link.
  5. Simple Analysis of Vegetative Trends in Earth Engine (SAVETREE) is a tool developed in Google Earth Engine for Lassen Volcanic National Park. It estimates tree mortality by fitting a linear trend to time-series data of a user-chosen spectral index. Users can export their new map as TIFF files, add historic fire layers, and produce graphs of the time-series values for a particular pixel by clicking on the layer.

Running SAVETREE: hit the "run" button in the center panel to make the widget appear. Using SAVETREE, the user can do the following:
* Spectral Index: Choose from NDMI, NDVI, NDWI or NBR to select which spectral index to create a linear regression layer for. The default is NDMI.
* Area of interest: Choose from Lassen Volcanic National Park, Lassen National Forest, the DEVELOP T2 Study Area, the Badger Planning Area, or choose "Your asset" to run the analysis on an asset you load yourself (see Loading an Asset for instructions). The default is LVNP.
* End year and duration: The year must be in YYYY format; it is the last year of the analysis period. The duration is the number of years in the time series and should be less than 20, with the most meaningful results coming from 3-7 years. For example, an end year of 1990 with a duration of 3 runs the analysis on 1988, 1989, and 1990. The defaults are 2016 and 5.
* Add Coefficient map: Performs the coefficient trend analysis on the chosen spectral index and area of interest for the duration you supplied, ending with the year you specified, and adds that layer to the map.
* Add Bivariate map: Performs the bivariate map analysis with the same settings and adds that layer to the map.
* Reset Map: Clears all layers. Note: it does not reset the area of interest or any items in the widget. To reset the area of interest, choose a different one from the dropdown before running a new analysis.
* Fire history start and end years: These years must be in YYYY format. They filter the fire history data so that only fires or treatments that occurred during those years are added to the map.
* Fire History Dataset: Select from the FRAP Statewide Wildfire Dataset, RX fire, Other treatment, or load your own fire data asset (see Loading an Asset). The wildfire, RX fire, and other treatment layers are FRAP datasets; for details and the most up-to-date data sets, go to http://frap.fire.ca.gov/projects/fire_data/fire_perimeters_index
* Export Coefficient Map / Export Bivariate Map: Exports the corresponding layer as a TIFF file. See Exporting a Layer for details on how to export layers to your Google Drive.
* Change Inspector: Click on any part of the Coefficient Trend or Bivariate Map layers and a graph of the year-by-year change for that point appears at the bottom of the widget. Click the little box with the arrow in the upper right-hand corner of the graph to open it in a new tab, from which you can download it.

SAVETREE was developed over two terms with DEVELOP:
* Authors v1.0: Joshua Verkerke, Anna McGarrigle, John Dilger
* Authors v2.0: Heather Myers, Anna McGarrigle, Peter Norton, Andrea Ferrer

Download code
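Under the hood, a coefficient trend map like SAVETREE's boils down to fitting a least-squares line to each pixel's spectral index values over the chosen years and keeping the slope. A minimal pure-Python sketch of that per-pixel fit (outside Earth Engine, with made-up index values) could look like:

```python
# Hypothetical sketch of the per-pixel trend fit behind a coefficient map:
# fit index = a + b*year by ordinary least squares and keep the slope b.
# Pure Python stand-in; the real tool does this per pixel in Earth Engine.

def trend_slope(years, values):
    """Least-squares slope of values over years."""
    n = len(years)
    mean_y = sum(years) / n
    mean_v = sum(values) / n
    num = sum((y - mean_y) * (v - mean_v) for y, v in zip(years, values))
    den = sum((y - mean_y) ** 2 for y in years)
    return num / den

# One declining and one stable "pixel" of NDMI-like values, 2012-2016.
years = [2012, 2013, 2014, 2015, 2016]
declining = [0.52, 0.48, 0.41, 0.36, 0.30]
stable = [0.50, 0.51, 0.49, 0.50, 0.50]

print(round(trend_slope(years, declining), 3))  # -0.056: negative trend, stressed/dying canopy
print(round(trend_slope(years, stable), 3))     # -0.001: essentially no trend
```

A strongly negative slope in a moisture-sensitive index like NDMI over 3-7 years is the signal the tool maps as likely tree mortality.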
  6. I saw similar news last month - Using Machine Learning to "Nowcast" Precipitation in High Resolution by Google. The results seemed pretty good. Here is a visualization of predictions made over the course of roughly one day. Left: the 1-hour HRRR prediction made at the top of each hour, the limit to how often HRRR provides predictions. Center: the ground truth, i.e., what we are trying to predict. Right: the predictions made by our model, every 2 minutes (displayed here every 15 minutes) at roughly 10 times the spatial resolution of HRRR. Notice that it captures the general motion and general shape of the storm. The two methods seem similar.
  7. Google announced that Dataset Search, a service that lets you search close to 25 million different publicly available data sets, is now out of beta. Dataset Search first launched in September 2018. Researchers can use these data sets - which range from small ones, such as how many cats there were in the Netherlands from 2010 to 2018, to large annotated audio and image sets - to check their hypotheses or to train and test their machine learning models. The tool currently indexes about 6 million tables. With this release, Dataset Search is getting a mobile version, and Google is adding a few new features. The first is a filter that lets you choose which type of data set you want to see (tables, images, text, etc.), which makes it easier to find the data you're looking for. In addition, the company has added more information about the data sets and the organizations that publish them. I searched 'remote sensing' and found this: Geographic information. A lot of the data in the search index comes from government agencies; in total, Google says, there are about 2 million U.S. government data sets in the index right now. But you'll also regularly see Google's own Kaggle show up, along with a number of other public and private organizations that make public data available. As Google notes, anybody who owns an interesting data set can make it available to be indexed by using standard schema.org markup to describe the data in more detail. Source
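That schema.org markup is just structured metadata embedded in the dataset's landing page as JSON-LD. A rough sketch of a minimal schema.org Dataset description (all names and URLs below are invented placeholders, not a real dataset):

```python
# Minimal schema.org "Dataset" JSON-LD of the kind Dataset Search indexes.
# Every name/URL here is an invented placeholder.
import json

dataset_markup = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Example Landsat-derived NDVI time series",
    "description": "Annual NDVI composites for an example study area.",
    "url": "https://example.org/datasets/ndvi-time-series",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "keywords": ["remote sensing", "NDVI", "Landsat"],
    "creator": {"@type": "Organization", "name": "Example GIS Lab"},
}

# This JSON-LD would sit inside a <script type="application/ld+json">
# tag on the dataset's landing page for the crawler to pick up.
print(json.dumps(dataset_markup, indent=2))
```

Once the page is crawled, the dataset shows up in Dataset Search with those fields driving the new type/license filters.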
  8. HDF format works best in QGIS because of GDAL, but you can use ArcGIS too. It is not clear whether you have a multidimensional raster or multi-point vector data. Please describe it in more detail.
  9. IMHO, switching between two software packages for the same operation shows different results because each uses a different parser and different environmental parameters, but these can be tweaked too. If you can set them as close as possible, the results will surely be similar. BTW, why would you need separate software to process the same image in the first place?
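If you want to check whether two packages really produced "the same" output, comparing pixel values with a tolerance is more telling than eyeballing the maps. A rough sketch, with plain Python lists standing in for exported raster bands (the values and tolerance are made up for illustration):

```python
# Hypothetical check that two exports of the "same" processed image agree
# within a tolerance, pixel by pixel. Lists stand in for raster bands.

def max_abs_diff(band_a, band_b):
    """Largest per-pixel absolute difference between two bands."""
    if len(band_a) != len(band_b):
        raise ValueError("bands differ in size")
    return max(abs(a - b) for a, b in zip(band_a, band_b))

qgis_out   = [0.231, 0.455, 0.128, 0.901]  # e.g. NDVI pixels from package A
arcgis_out = [0.232, 0.454, 0.128, 0.902]  # same pixels from package B

diff = max_abs_diff(qgis_out, arcgis_out)
print(f"max difference: {diff:.3f}")
print("match within tolerance:", diff <= 0.005)
```

Differences well below the data's noise floor mean the two tools effectively agree; anything larger points at a parser or parameter mismatch worth tracking down.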
  10. This is a very interesting mapping platform for the agriculture community. The Belarus-based startup's platform uses Sentinel-2 data and AI to instantly delineate thousands of crop fields and the status of 20+ crops in the USA and Europe. They also have smartphone apps you can use to get these insights for your own field. The platform applies machine learning, which constantly improves the service as more data and feedback are collected. Considering that a mind-boggling 376,835,301 hectares of fields across Europe and the USA have already been analyzed and catalogued, the system has reached a remarkable level of maturity. OneSoil - a Copernicus-enabled start-up from Belarus. Check out their interactive map. OneSoil homepage
  11. It's 'techtober' again. Microsoft unveiled a lot of new devices yesterday: a refreshed lineup with the Surface Laptop and a 2-in-1 with USB-C, new wireless earbuds, and two dual-screen devices. One of them is a tablet (Surface Neo) and the other is a phone (Surface Duo), and they come with a new version of Windows - Windows 10X. This is the Surface Neo. This is the Surface Duo. I particularly like the Duo because it is probably the best implementation of a dual screen in a smartphone to date. It runs a customised version of Android. Both will be available late next year. MICROSOFT SURFACE NEO FIRST LOOK: THE FUTURE OF WINDOWS 10X IS DUAL-SCREEN A FIRST LOOK AT SURFACE DUO, MICROSOFT'S FOLDABLE ANDROID PHONE
  12. deck.gl (developed by Uber) is a WebGL-powered framework for visual exploratory data analysis of large datasets. deck.gl is designed to make visualization of large data sets simple. It enables users to quickly get impressive visual results with limited effort through composition of existing layers, while offering a complete architecture for packaging advanced WebGL-based visualizations as reusable JavaScript layers. The basic idea of deck.gl is to render a stack of visual overlays, usually (but not always) over maps. To make this simple concept work, deck.gl handles a number of challenges:
* Handling of large data sets and performant updates
* Interactive event handling such as picking
* Cartographic projections and integration with the underlying map
* A catalog of proven, well-tested layers
* Easy creation of new layers or customization of existing layers

Tutorials Getting started Uber's Vis.gl on Medium
  13. The GeoforGood Summit 2019 drew its curtains closed on 19 Sep 2019, and as a first-time attendee, I was amazed at the number of new developments announced at the summit. The summit - the first of its kind - combined the user summit and the developer summit into one, letting users benefit from knowledge of new tools and developers understand the needs of users. Since my primary focus was on large-scale geospatial modeling, I attended only the workshops and breakout sessions related to Google Earth Engine. With that, let's look at 3 exciting new developments to hit Earth Engine.

Updated documentation on machine learning

Documentation, really? Yes! As an amateur Earth Engine user myself, my number one complaint about the tool has been the abysmal quality of its documentation, spread between the app developers site, Google Earth's blog, and Stack Exchange answers. So any update to the documentation is welcome. I am glad the documentation has been updated to help the ever-exploding user base of geospatial data scientists interested in implementing machine learning and deep learning models, and it comes with its own example Colab notebooks. The example notebooks include supervised classification, unsupervised classification, a dense neural network, a convolutional neural network, and deep learning on Google Cloud. I found these notebooks incredibly useful for getting started, as there are quite a few non-trivial data type conversions (int to float32 and so on) in the process flow.

Earth Engine and AI Platform integration

Nick Clinton and Chris Brown jointly announced the much-overdue Earth Engine + Google AI Platform integration. Until now, users were essentially limited to running small jobs on Google Colab's virtual machine (VM) and hoping that the connection with the VM didn't time out (it usually lasts about 4 hours). Other limitations included the lack of any task monitoring or queuing capabilities. Not anymore! The new ee.Model() package lets users communicate with a Google Cloud server that they can spin up based on their own needs. Needless to say, this is a HUGE improvement over the previous primitive deep learning support on the VM. Although it was free, one simply could not train, validate, predict, and deploy any model larger than a few layers; that had to be done separately on the Google AI Platform once the .TFRecord objects were created in a Google bucket. With this cloud integration, the task has been simplified tremendously: users can run and test their models right from the Colab environment. The ee.Model() class comes with some useful functions, such as ee.Model.fromAIPlatformPredictor() to make predictions on Earth Engine data directly from your model sitting on Google Cloud. Lastly, since your model now sits in the AI Platform, you can cheat and use your own models trained offline to predict on Earth Engine data and make maps of their output. Note that your model must be saved in the tf.contrib.saved_model format if you wish to do so; the popular Keras call model.save('model.h5') is not compatible with ee.Model().

Moving forward, it seems the team plans to stick to the Colab Python IDE for all deep learning applications. However, it's not a death blow for the beloved JavaScript Code Editor. At the summit, I saw that participants still preferred the Code Editor for their non-neural machine learning work (support vector machines, random forests, etc.). Being a Python lover myself, I too go to the Code Editor for quick visualizations and for Earth Engine Apps! I did not get to try the new ee.Model() package at the summit, but Nick Clinton demonstrated a notebook where a simple working example has been hosted to help us learn the function calls. Some kinks still remain - like a convolution kernel being limited to only 144 pixels wide during prediction because of "the way Earth Engine communicates with Cloud Platform" - but he assured us that this will be fixed soon. Overall, I am excited about the integration because Earth Engine is now a real alternative for my geospatial computing work. And with the Earth Engine team promising more new functions in the ee.Model() class, I wonder whether companies and labs around the world will start migrating their modeling work to Earth Engine.

Cooler visualizations!

Matt Hancher and Tyler Erickson showed some new visualization functionality that makes producing animated visuals vastly simpler. With the ee.ImageCollection.getVideoThumbURL() function, you can create your own animated GIFs within a few seconds! I tried it on a bunch of datasets, and the speed of creating the GIFs was truly impressive. Say goodbye to exporting each iteration of a video to your Drive, because these GIFs appear right in the console using the print() command! Shown above is an example of a global temperature forecast over time from the 'NOAA/GFS0P25' dataset. The code for making the GIF can be found here. The animation is based on the example in the original blog post by Michael DeWitt, and I referred to the gif-making tutorial on the developers page to make it.

I did not get to cover all the new features and functionality introduced at the summit. For that, be on the lookout for event highlights on Google Earth's blog. Meanwhile, you can check out the session resources from the summit for presentations and notebooks on topics you are interested in. Presentation and resources. Published in Medium