Leaderboard


Popular Content

Showing content with the highest reputation since 09/23/2019 in all areas

  1. 3 points
    The first thing to do before mapping is to set up the camera parameters. Before setting them, it is recommended to reset all parameters on the camera first. To set camera parameters manually, switch the camera to manual mode.
    Image quality: Extra fine.
    Shutter speed: to remove blur from photos, the shutter speed should be set to a higher value; 1200–1600 is recommended. A higher shutter speed reduces image quality, so if there is blur in the image, increase the shutter speed.
    ISO: the lower the ISO, the higher the image quality. ISO between 160–300 is recommended. If there is no blur but image quality is low, reduce the ISO.
    Focus: it is recommended to set the focus manually on the ground before a flight. Point the camera at a distant object and slightly increase the focus; you will see on the camera screen that image sharpness changes with the value. Set the image sharpness at its highest (slide the slider close to the infinity point on the screen and watch how the sharpness changes as you slide).
    White balance: recommended to set to auto.
    On a surveying mission, sidelap, overlap, and buffer have to be set higher to get a better-quality surveying result. First set the RESOLUTION you would like to get for your surveying project. Changing the resolution changes the flight altitude and also affects the coverage of a single flight.
    Overlap: 70%. This will increase the number of photos taken along each flight line, so the camera should be capable of capturing fast enough.
    Sidelap: 70% recommended. Flying with higher sidelap between flight lines is a way to get more matches in the imagery, but it also reduces the coverage of a single flight.
    Buffer: 12%. The buffer enlarges the flight plan to get more images at the borders, which improves the quality of the map.
    source: https://dronee.aero/blogs/dronee-pilot-blog/few-things-to-set-correctly-to-get-high-quality-surveying-results
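The resolution-altitude relation mentioned above can be sketched numerically. A minimal Python sketch, assuming a hypothetical nadir-pointing camera; the function name and all parameter values are illustrative, not taken from the post:

```python
# Sketch: relation between desired ground resolution (GSD) and flight altitude.
# All camera parameters below are illustrative assumptions.
def flight_altitude_m(gsd_cm_per_px, focal_length_mm, sensor_width_mm, image_width_px):
    """Altitude (m) needed to reach a given GSD (cm/px) with a nadir-pointing camera."""
    # GSD [cm/px] = (sensor_width [mm] * altitude [m] * 100) / (focal_length [mm] * image_width [px])
    # Solving for altitude:
    return gsd_cm_per_px * focal_length_mm * image_width_px / (sensor_width_mm * 100.0)

# Example: 2.5 cm/px with a 16 mm lens, 23.5 mm-wide APS-C sensor, 6000 px-wide images
alt = flight_altitude_m(2.5, 16, 23.5, 6000)  # → ~102 m
```

Lowering the target GSD (finer resolution) lowers the required altitude, which in turn shrinks the footprint of each photo and the coverage of a single flight, exactly the trade-off the post describes.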
  2. 3 points
    The GeoforGood Summit 2019 drew its curtains close on 19 Sep 2019, and as a first-time attendee, I was amazed to see the number of new developments announced at the summit. The summit, being a first of its kind, combined the user summit and the developer summit into one to let users benefit from the knowledge of new tools and developers understand the needs of the users. Since my primary focus was on large-scale geospatial modeling, I attended the workshops and breakout sessions related to Google Earth Engine only. With that, let's look at 3 new exciting developments to hit Earth Engine.
Updated documentation on machine learning
Documentation, really? Yes! As an amateur Earth Engine user myself, my number one complaint about the tool has been its abysmal quality of documentation, spread between its app developers site, Google Earth's blog, and their Stack Exchange answers. So any update to the documentation is welcome. I am glad that the documentation has been updated to help the ever-exploding user base of geospatial data scientists interested in implementing machine learning and deep learning models. The documentation comes with its own example Colab notebooks. The example notebooks include supervised classification, unsupervised classification, a dense neural network, a convolutional neural network, and deep learning on Google Cloud. I found these notebooks incredibly useful for getting started, as there are quite a few non-trivial data type conversions (int to float32 and so on) in the process flow.
Earth Engine and AI Platform Integration
Nick Clinton and Chris Brown jointly announced the much overdue Earth Engine + Google AI Platform integration. Until now, users were essentially limited to running small jobs on Google Colab's virtual machine (VM) and hoping that the connection with the VM doesn't time out (it usually lasts for about 4 hours). Other limitations included the lack of any task monitoring or queuing capabilities. Not anymore!
The new ee.Model() package lets users communicate with a Google Cloud server that they can spin up based on their own needs. Needless to say, this is a HUGE improvement over the previous primitive deep learning support provided on the VM. Although it was free, one simply could not train, validate, predict, and deploy any model larger than a few layers. It had to be done separately on the Google AI Platform once the .TFRecord objects were created in one's Google Cloud Storage bucket. With this cloud integration, that task has been simplified tremendously by letting users run and test their models right from the Colab environment. The ee.Model() class comes with some useful functions, such as ee.Model.fromAIPlatformPredictor() to make predictions on Earth Engine data directly from your model sitting on Google Cloud. Lastly, since your model now sits on the AI Platform, you can cheat and use your own models trained offline to predict on Earth Engine data and make maps of their output. Note that your model must be saved in the tf.contrib.saved_model format if you wish to do so. The popular Keras function model.save('model.h5') is not compatible with ee.Model(). Moving forward, it seems like the team plans to stick to the Colab Python IDE for all deep learning applications. However, it's not a death blow for the beloved JavaScript code editor. At the summit, I saw that participants still preferred the JavaScript code editor for their non-neural machine learning work (like support vector machines, random forests, etc.). Being a Python lover myself, I too go to the code editor for quick visualizations and for Earth Engine Apps! I did not get to try out the new ee.Model() package at the summit, but Nick Clinton demonstrated a notebook where a simple working example has been hosted to help us learn the function calls.
Some kinks still remain in the development, like limiting a convolution kernel to only 144 pixels wide during prediction because of "the way Earth Engine communicates with Cloud Platform", but he assured us that they will be fixed soon. Overall, I am excited about the integration because Earth Engine is now a real alternative for my geospatial computing work. And with the Earth Engine team promising more new functions in the ee.Model() class, I wonder if companies and labs around the world will start migrating their modeling work to Earth Engine.
Cooler Visualizations!
Matt Hancher and Tyler Erickson displayed some new functionality related to visualizations, and I found that it made it vastly simpler to produce animated visuals. With the ee.ImageCollection.getVideoThumbURL() function, you can create your own animated GIFs within a few seconds! I tried it on a bunch of datasets, and the speed of creating the GIFs was truly impressive. Say bye to exporting each iteration of a video to your drive, because these GIFs appear right in the console using the print() command! Shown above is an example of a global temperature forecast over time from the 'NOAA/GFS0P25' dataset. The code for making the GIF can be found here. The animation is based on the example shown in the original blog post by Michael DeWitt, and I referred to this gif-making tutorial on the developers page to make it. I did not get to cover all the new features and functionality introduced at the summit. For that, be on the lookout for event highlights on Google Earth's blog. Meanwhile, you can check out the session resources from the summit for presentations and notebooks on topics you are interested in. Presentation and resources Published in Medium
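For reference, a minimal sketch of the GIF-thumbnail call in the Earth Engine Python API. It assumes an authenticated Earth Engine session; the dataset ID matches the one named above, but the date range, band name, region, and visualization parameters are illustrative assumptions, not the values from the original notebook:

```python
# Hedged sketch: animated GIF thumbnail from an ImageCollection.
# Requires prior ee.Authenticate() / ee.Initialize() with valid credentials.
import ee

ee.Initialize()

col = (ee.ImageCollection('NOAA/GFS0P25')
         .filterDate('2019-09-01', '2019-09-02')
         .select('temperature_2m_above_ground'))

video_args = {
    'dimensions': 360,                                   # output width in pixels
    'region': ee.Geometry.Rectangle([-180, -60, 180, 85]),
    'framesPerSecond': 6,
    'min': -40, 'max': 35,                               # display range, degrees C
    'palette': ['blue', 'white', 'red'],
}

# Prints a URL; opening it in a browser shows the animated GIF.
print(col.getVideoThumbURL(video_args))
```

The print-and-open workflow mirrors what the post describes: no export to Drive, the GIF URL appears directly in the console.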
  3. 2 points
    Just found this interesting article on the Agisoft forum: source: https://www.agisoft.com/forum/index.php?topic=7851.0
  4. 1 point
  5. 1 point
    Got SHP data of a multi-hazard map covering all of Indonesia. Please check it: https://drive.google.com/file/d/1anG5xcA9uMo1P9jLeppBvEXpaExJsLhk/view Untested; I forgot where I got this link
  6. 1 point
    Looking forward to playing Mario and The Legend of Zelda on those devices 😁
  7. 1 point
    SELECT *, st_buffer(geom, 50) AS geom_buf INTO gis_osm_pois_buf FROM gis_osm_pois; Add * to your query to include all your fields. The new geometry field comes from st_buffer; since * already includes the original geom column, give the buffered one a distinct alias here (such as geom_buf, as reusing the name geom would raise a duplicate-column error), then rename it to geom/geometry/the_geom or whatever you prefer.
  8. 1 point
    deck.gl (developed by Uber) is a WebGL-powered framework for visual exploratory data analysis of large datasets. deck.gl is designed to make visualization of large datasets simple. It enables users to quickly get impressive visual results with limited effort through composition of existing layers, while offering a complete architecture for packaging advanced WebGL-based visualizations as reusable JavaScript layers. The basic idea of using deck.gl is to render a stack of visual overlays, usually (but not always) over maps. To make this simple concept work, deck.gl handles a number of challenges:
    - handling of large datasets and performant updates
    - interactive event handling such as picking
    - cartographic projections and integration with the underlying map
    - a catalog of proven, well-tested layers
    - easy creation of new layers or customization of existing layers
    Tutorials Getting started Uber's Vis.gl in Medium
  9. 1 point
    One of my favorite image hosts is shutting down; this is their announcement: Rest in Peace TinyPic
  10. 1 point
    Hi Darksabersan, the links are dead. Could you reactivate them, or perhaps upload to a different site like Mega.nz, please? Thanks
  11. 1 point
    TopoMapCreator (beta) A set of GIS tools that helps create topographic maps. The TopoMapCreator consists of 5 programs: MapCreator, GeoToolsCmd, TopoMap, EcwToMobile and ExtendedMapCreator. More information, for example about how to install it, can be found under TopoMapCreator. Now read what the 5 programs do:
    1. ExtendedMapCreator is a desktop program that creates "Topographic Maps" from OSM, NASA and ESA data. You simply define a map extent by dragging over a browsable world map, click on start and wait till the GeoTIFF, ECW, GALILEO, ORUXMAPS or NAVIMAP files get created. ExtendedMapCreator is based on the Mapnik renderer; nevertheless, all data downloading and processing is fully automatic. Click on ExtendedMapCreator to read more about the program!
    2. MapCreator is a GIS toolset. The tools have the common goal of creating topographic maps. Currently it consists of 10 tools: the GeoreferencingTool georeferences scanned map series; the EcwHillshaderTool adds hillshades to a map; the SrtmHillshadesTool creates hillshades; the EcwToMobileTool converts a map to a smartphone app format; the GeonamesToShapeTool creates a shapefile from a GeoNames file; the ShapeToOsmTool creates an OSM file from shapefiles; the WarpEcwTool warps (reprojects) huge maps; the RussianMapsCreatorTool downloads and processes Russian maps; the QgisToEcwTool makes a print-screen of a QGIS view; the USGSTopoMapTool downloads and processes USGS maps. Click on any of the tools to learn more about it!
    3. GeoToolsCmd provides the same GIS toolset as MapCreator, but accessible from the command prompt. With GeoToolsCmd it is possible to write batch files.
    4. TopoMap is a simple desktop program to download specific maps.
    5. EcwToMobile is a simple desktop program to convert an ECW file to a mobile app format. The program is redundant with the EcwToMobileTool.
    darksabersan.
  12. 1 point
    I am getting some images of an oil palm estate taken from a DIY drone. The images are stitched and mosaicked. Has anyone done automated tree counting on these images? I have seen some examples using eCognition, but those used multispectral images.
  13. 1 point
    It might not be a very good idea to use ArcGIS, but somehow we can use this software for tree counting! My method is quite simple:
    1. Get a piece of hi-res imagery from Google Earth.
    2. Open the image in ArcGIS.
    3. Make sure the "Image Classification" toolbar is checked (turned on).
    4. From the "Image Classification" toolbar, select "Classification" --> "Iso Cluster Unsupervised Classification".
    5. Choose "2" for "Number of Classes" in the dialog box. By doing this, you will have only trees (or something similar) and non-tree objects in the result. Run it and see the result (below). C'mon, it even shows the "Google Earth" trademark in the result (LOL).
    6. Now go to the ArcGIS Toolbox and select "Conversion" --> "From Raster" --> "Raster to Polygon". In the dialog box, check the "Simplify polygons (optional)" checkbox.
    7. The result is as below.
    8. Mask the area where you want to count the trees, then count the number of polygons within that mask. Bingo!!!!
    If you want to do it better, you can do some pre-processing steps on your image. Also, when you have the polygon layer, you can try to simplify it again (using the Eliminate, Integrate... functions) before counting trees. Enjoy ESRI.
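The same pipeline can be sketched outside ArcGIS. A minimal Python analog, assuming numpy and scipy are available: a two-class intensity split stands in for Iso Cluster, and connected-component labeling stands in for the raster-to-polygon count. The data here is synthetic, and all names are illustrative:

```python
# Sketch: 2-class unsupervised split of a grayscale image, then count
# connected bright regions ("trees"). Synthetic data, illustrative only.
import numpy as np
from scipy import ndimage

def count_blobs(gray):
    """Split pixels into 2 intensity classes (1-D k-means), count bright components."""
    # two-class k-means on intensity, initialized at the extremes
    c0, c1 = float(gray.min()), float(gray.max())
    for _ in range(20):
        mid = (c0 + c1) / 2.0
        lo, hi = gray[gray <= mid], gray[gray > mid]
        if lo.size:
            c0 = float(lo.mean())
        if hi.size:
            c1 = float(hi.mean())
    tree_mask = gray > (c0 + c1) / 2.0      # brighter class = "trees"
    labels, n = ndimage.label(tree_mask)    # one label per contiguous canopy
    return n

# synthetic image: dark background with three separate bright "canopies"
img = np.zeros((60, 60))
img[5:15, 5:15] = 1.0
img[30:40, 30:40] = 1.0
img[50:58, 10:18] = 1.0
n_trees = count_blobs(img)  # → 3
```

As with the ArcGIS recipe, touching canopies merge into one component, which is why the post's suggestion to simplify or pre-process before counting matters.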
  14. 1 point
    Oil palm tree counting is a quite simple application. You can simply use the following methods:
    1. Ravine extraction (Honda Kyioshi): based on the fact that the top of the canopy is the brightest point, while the ravine (the valley between 2 trees) looks darker. Doing this, you will be able to locate the central point of each tree; counting is then simple work afterwards.
    2. Local threshold texture matching: requires some programming. Take individual trees as samples (with different canopy sizes) and then match them over the whole image. The matching algorithm can be image correlation or histogram matching. I already did it with VB programming and it works perfectly. By doing this, you will also be able to map individual trees.
    All of this is based on RGB imagery. For better results, mask the plantation areas before running the procedure. Of course there are several other options. If you want to run the procedure in an automatic way, programming skill is needed.
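The first method can be sketched as local-maximum detection. This is a hedged Python illustration of the idea (not the original VB implementation), using a synthetic brightness surface; the window size and brightness threshold are illustrative:

```python
# Sketch of method 1: canopy tops are the brightest points, ravines between
# trees are darker, so tree centers show up as local brightness maxima.
import numpy as np
from scipy import ndimage

def canopy_tops(gray, window=5, min_brightness=0.5):
    """Return (row, col) coordinates of pixels that are the maximum of their local window."""
    local_max = ndimage.maximum_filter(gray, size=window)
    peaks = (gray == local_max) & (gray > min_brightness)
    return np.argwhere(peaks)

# synthetic brightness surface: two Gaussian "canopies" on a dark background
yy, xx = np.mgrid[0:40, 0:40]
img = (np.exp(-((yy - 10) ** 2 + (xx - 10) ** 2) / 8.0)
       + np.exp(-((yy - 30) ** 2 + (xx - 28) ** 2) / 8.0))
tops = canopy_tops(img)
n_trees = len(tops)  # → 2
```

On real mosaics the window should roughly match the canopy diameter in pixels, otherwise one crown can yield several peaks or neighboring crowns can merge.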
  15. 1 point
    Look at the MaxMax website; they do NIR conversions on cameras that can fit on drones. I had a Canon compact converted by them and it worked 100%. Try ImageJ (free software): run the NDVI settings and you will get some data. Or use a blue filter on your camera and you will get usable images for NDVI. I got good data on desert palms from RGB images in eCognition, but you have to play with the analysis a bit! Having a NIR layer will make life easier for you. Good luck
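For the NDVI step mentioned above, the formula itself is simple. A minimal Python sketch, assuming the NIR and red bands arrive as arrays scaled to [0, 1]; the function name and sample values are illustrative:

```python
# Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
# Healthy vegetation reflects strongly in NIR and absorbs red light,
# so NDVI is high (toward +1) over vegetation and low elsewhere.
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Per-pixel NDVI; eps guards against division by zero on dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# example pixel: strong NIR reflectance, low red reflectance
v = ndvi([0.6], [0.1])  # → ~0.71
```

With a blue-filter conversion like the one described in the post, the channel that stands in for NIR depends on the specific filter, so check your camera's documentation before picking bands.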
  16. 1 point
    If you have a very high-resolution image (< 1 m) with only 3 spectral bands in the visible spectrum (Red, Green, Blue - RGB) and no additional infrared (NIR) band, the best option to extract trees is an OBIA method (eCognition is one of the best OBIA software packages). So, for your task, I would use eCognition.