
Lurker

Moderators
  • Content Count

    4,008
  • Joined

  • Last visited

  • Days Won

    283

Lurker last won the day on July 16

Lurker had the most liked content!

Community Reputation

2,091 Celebrity

About Lurker

  • Rank
    Associate Professor
  • Birthday 02/13/1983

Profile Information

  • Gender
    Male
  • Location
    INDONESIA
  • Interests
    GIS and Remote Sensing


  1. True, Huawei already leads in its enterprise business, for example with the first 5G implementation. If they can make this Hongmeng, a.k.a. HarmonyOS, a success, they will surely become another titan in the mobile business.
  2. Yes, the hardest part is building an app ecosystem: getting developers interested in this new OS and writing apps for it.
  3. Huawei Consumer Business Group CEO Richard Yu announced the tech giant's newest operating system, HarmonyOS, during the Huawei Developers' Conference in Dongguan, China. According to reports, the microkernel-based distributed operating system is set to launch later in 2019 for smart screen products such as TVs, smart watches and in-vehicle infotainment systems. As the company seeks to lessen its dependence on American businesses, Huawei plans to expand HarmonyOS' coverage to smartphones and other devices over the next three years.

During the conference, Yu claimed that HarmonyOS is "more powerful and secure than Android," adding that its IPC performance is five times better than Google Fuchsia's and that its microkernel has "one-thousandth the amount of code in the Linux kernel."

"A modularized HarmonyOS can be nested to adapt flexibly to any device to create a seamless cross-device experience. Developed via the distributed capability kit, it builds the foundation of a shared developer ecosystem," Huawei said in a statement, revealing that it began exploring the idea of its own operating system a decade ago.

Although Huawei will continue to use Android for its devices, HarmonyOS will serve as a fallback in case of emergencies. "We will prioritize Android for smartphones, but if we can't use Android, we will be able to install HarmonyOS quickly," Yu said.

source: https://hypebeast.com/2019/8/huawei-unveils-harmonyos-richard-yu
  4. Sorry for the delay, now fixed.
  5. You simply put in the URL or IP address of the server that serves the images (see http://desktop.arcgis.com/en/arcmap/10.3/manage-data/using-arccatalog/connecting-to-gis-servers.htm). If you enter the correct address, the services will automatically populate in the window below.
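That auto-population works because ArcGIS Server publishes a catalog of everything it serves through its REST endpoint. As a rough illustration (my addition, not part of the original reply; Esri's public sample server is used as a stand-in for your own address), a short Python sketch can list those services directly:

    import requests

    # Catalog endpoint of an ArcGIS Server instance; swap in your own
    # server address. This host is Esri's public sample server.
    BASE = "https://sampleserver6.arcgisonline.com/arcgis/rest/services"

    resp = requests.get(BASE, params={"f": "json"}, timeout=30)
    resp.raise_for_status()
    catalog = resp.json()

    # Top-level services and folders, the same items ArcCatalog shows.
    for svc in catalog.get("services", []):
        print(svc["name"], "-", svc["type"])
    for folder in catalog.get("folders", []):
        print("folder:", folder)

The same f=json query works on any folder or individual service URL; ArcCatalog does something equivalent behind the scenes when it fills in that window.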
  6. From NOAA's official site:

NOAA's flagship weather model, the Global Forecast System (GFS), is undergoing a significant upgrade today to include a new dynamical core called the Finite-Volume Cubed-Sphere (FV3). This upgrade will drive global numerical weather prediction into the future with improved forecasts of severe weather, winter storms, and tropical cyclone intensity and track.

NOAA research scientists originally developed the FV3 as a tool to predict long-range weather patterns at time frames ranging from multiple decades to interannual, seasonal and subseasonal. In recent years, creators of the FV3 at NOAA's Geophysical Fluid Dynamics Laboratory expanded it to also become the engine for NOAA's next-generation operational GFS.

"In the past few years, NOAA has made several significant technological leaps into the future, from new satellites in orbit to this latest weather model upgrade," said Secretary of Commerce Wilbur Ross. "Through the use of this advanced model, the dedicated scientists, forecasters, and staff at NOAA will remain ever-alert for any threat to American lives and property."

The FV3-based GFS brings together the superior dynamics of global climate modeling with the day-to-day reliability and speed of operational numerical weather prediction. Additional enhancements to the science that produces rain and snow in the GFS also contribute to the improved forecasting capability of this upgrade.

"The significant enhancements to the GFS, along with creating NOAA's new Earth Prediction Innovation Center, are positioning the U.S. to reclaim international leadership in the global earth-system modeling community," said Neil Jacobs, Ph.D., acting NOAA administrator.

The GFS upgrade underwent rigorous testing led by NOAA's National Centers for Environmental Prediction (NCEP) Environmental Modeling Center and NCEP Central Operations, involving more than 100 scientists, modelers, programmers and technicians from around the country. With real-time evaluations for a year alongside the previous version of the GFS, NOAA carefully documented the strengths of each. When tested against historic weather dating back an additional three years, the upgraded FV3-based GFS performed better across a wide range of weather phenomena.

The scientific and performance evaluation shows that the upgraded FV3-based GFS provides results equal to or better than the current global model in many measures. This upgrade establishes the foundation for further advancements as we improve observation quality control, data assimilation, and the model physics.

"We are excited about the advancements enabled by the new GFS dynamical core and its prospects for the future," said Louis W. Uccellini, Ph.D., director, NOAA's National Weather Service. "Switching out the dynamical core will have a significant impact on our ability to make more accurate 1-2 day forecasts and increase the level of accuracy for our 3-7 day forecasts. However, our job doesn't end there; we also have to improve the physics as well as the data assimilation system used to ingest data and initialize the model."

Uccellini explained that NOAA's work with the National Center for Atmospheric Research to build a common infrastructure between the operational and research communities will help advance the FV3-based GFS beyond changing the core. "This new dynamical core and our work with NCAR will accelerate the transition of research advances into operations to produce even more accurate forecasts in the future," added Uccellini.

Operating a new and sophisticated weather model requires robust computing capacity. In January 2018, NOAA augmented its weather and climate supercomputing systems to increase performance by nearly 50 percent and added 60 percent more storage capacity to collect and process weather, water and climate observations. This increased capacity enabled the parallel testing of the FV3-based GFS throughout the year. The retiring version of the model will no longer be used in operations but will continue to run in parallel through September 2019 to provide model users with data access and additional time to compare performance.

source: https://www.noaa.gov/media-release/noaa-upgrades-us-global-weather-forecast-model
  7. Meteorology revolves as much around good weather models as it does good weather data, and the core US model is about to receive a long-overdue refresh. NOAA has upgraded its Global Forecast System with a long-in-testing dynamical core, the Finite-Volume Cubed-Sphere (aka FV3). It's the first time the model has been replaced in roughly 40 years, and it promises higher-resolution forecasts, lower computational overhead and more realistic water vapor physics.

The results promise to be tangible. NOAA believes there will be a "significant impact" on one- and two-day forecasts, and improved overall accuracy for forecasts up to a week ahead. It also hopes for further improvements to both the physics and the system that ingests data and initializes the weather model. This is on top of previous upgrades to NOAA supercomputers that should provide more capacity.

The old model will still hang around through September, although not as a backup; it's strictly there for data access and performance comparisons. FV3 was chosen years ago to replace the old core, and it has been in parallel testing for over a year.

Not everyone is completely satisfied with the new model. Ars Technica pointed out that the weather community is concerned about surface temperatures that have skewed low, for instance. It should be more accurate overall, though, and that could be crucial for tracking hurricanes, blizzards and other serious weather patterns that can evolve by the hour.

source: https://www.engadget.com/2019/06/12/us-weather-forecast-model-update
  8. A multifunction casing: you can run 3D games and grate cheese for your hamburger. Excellent thinking, Apple, as always. LOL
  9. We are all already familiar with GPS navigation outdoors and what wonders it does not only for our everyday life, but also for business operations. Outdoor maps, allowing for navigation via car or by foot, have long helped mankind to find even the most remote and hidden places. Increased levels of efficiency, unprecedented levels of control over operational processes, route planning, monitoring of deliveries, safety and security regulations and much more have been made possible.

Some places are, however, harder to reach and navigate than others: big indoor areas such as universities, hospitals, airports, convention centers or factories, among others. Luckily, that struggle is about to become a thing of the past. So what's the solution for navigating through and managing complex indoor buildings?

Indoor Mapping and Visualization with ArcGIS Indoors

The answer is simple: indoor mapping. Indoor mapping is a revolutionary concept that visualizes an indoor venue and spatial data on a digital 2D or 3D map. Showing places, people and assets on a digital map enables solutions such as indoor positioning and navigation. These, in turn, allow for many different use cases that help companies optimize their workflows and efficiencies.

Mobile Navigation and Data

The idea behind this solution is the same as outdoor navigation, except that it allows you to see routes and locate objects and people in a closed environment. As GPS signals are not available indoors, different technology solutions based on iBeacons, WiFi or lighting are used to create indoor maps and enable positioning services. You can plan a route indoors from point A to point B with customized pins and remarks, analyze whether facilities are being used to their full potential, discover new business opportunities, evaluate user behaviors and send users real-time targeted messages based on their location, intelligently park vehicles, and the list goes on!

With the help of geolocation, indoor mapping stores and provides versatile real-time data on everything that is happening indoors, including placements and conditions of assets and human movements. This allows for a common operating picture, where all stakeholders share the same level of information and insight into internal processes. Having a centralized mapping system enables effortless navigation through all the assets and keeps facility managers updated on the latest changes, which ultimately improves business efficiency.

Just think how many operational insights can be gained through visualizations of assets on your customized map: you can monitor and analyze the whole infrastructure and optimize performance accordingly. How do you engage your users and visitors at the right time and place? What does it take to improve security management? Are the workflow processes moving seamlessly? Answers to those and many other questions can be found in an indoor mapping solution. Interactive indoor experiences are no longer a thing of the future; they are here and now.

source: https://www.esri.com/arcgis-blog/products/arcgis-indoors/mapping/what-is-indoor-mapping/
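The article only gestures at how beacon-based indoor positioning works, so here is a small illustrative Python sketch (my addition, not Esri's; the path-loss exponent, calibrated 1 m RSSI and beacon layout are all made-up assumptions) showing the usual two steps: convert each beacon's signal strength to an approximate distance, then trilaterate a position from three known beacon locations.

    import numpy as np

    def rssi_to_distance(rssi, tx_power=-59, n=2.0):
        # Log-distance path-loss model; tx_power is the calibrated RSSI
        # at 1 m and n the environment's path-loss exponent (assumed).
        return 10 ** ((tx_power - rssi) / (10 * n))

    beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0]])  # known positions (m)
    rssis = [-65, -72, -70]                                    # measured RSSI (dBm)
    dists = np.array([rssi_to_distance(r) for r in rssis])

    # Linearized trilateration: subtracting the first circle equation
    # from the others yields a small least-squares system A @ p = b.
    A = 2 * (beacons[1:] - beacons[0])
    b = (dists[0] ** 2 - dists[1:] ** 2
         + np.sum(beacons[1:] ** 2, axis=1) - np.sum(beacons[0] ** 2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("estimated position (m):", position)

Real deployments smooth the RSSI readings heavily and fuse many more beacons, since raw Bluetooth signal strength is noisy.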
  10. Found this interesting tutorial:

For the last couple of years I have been testing out the ever-improving support for parallel query processing in PostgreSQL, particularly in conjunction with the PostGIS spatial extension. Spatial queries tend to be CPU-bound, so applying parallel processing is frequently a big win for us.

Initially, the results were pretty bad. With PostgreSQL 10, it was possible to force some parallel queries by jimmying with global cost parameters, but nothing would execute in parallel out of the box. With PostgreSQL 11, we got support for parallel aggregates, and those tended to parallelize in PostGIS right out of the box. However, parallel scans still required some manual alterations to PostGIS function costs, and parallel joins were basically impossible to force no matter what knobs you turned.

With PostgreSQL 12 and PostGIS 3, all that has changed. All standard query types now readily parallelize using our default costings: parallel sequential scans, parallel aggregates, and parallel joins!

TL;DR: PostgreSQL 12 and PostGIS 3 have finally cracked the parallel spatial query execution problem, and all major queries execute in parallel without extraordinary interventions.

What Changed

With PostgreSQL 11, most parallelization worked, but only at much higher function costs than we could apply to PostGIS functions. With higher PostGIS function costs, other parts of PostGIS stopped working, so we were stuck in a Catch-22: improve costing and break common queries, or leave things working with non-parallel behaviour. For PostgreSQL 12, the core team (in particular Tom Lane) provided us with a sophisticated new way to add spatial index functionality to our key functions. With that improvement in place, we were able to globally increase our function costs without breaking existing queries. That in turn has signalled the parallel query planning algorithms in PostgreSQL to parallelize spatial queries more aggressively.

Setup

In order to run these tests yourself, you will need PostgreSQL 12 and PostGIS 3.0. You'll also need a multi-core computer to see actual performance changes; I used a 4-core desktop for my tests, so I could expect 4x improvements at best. The setup instructions show where to download the Canadian polling division data used for the testing:

    pd       a table of ~70K polygons
    pts      a table of ~70K points
    pts_10   a table of ~700K points
    pts_100  a table of ~7M points

We will work with the default configuration parameters and just mess with max_parallel_workers_per_gather at run-time to turn parallelism on and off for comparison purposes. max_parallel_workers_per_gather sets the maximum number of workers that can be started by a single Gather or Gather Merge node; setting this value to 0 disables parallel query execution. Default 2.

Before running tests, make sure you have a handle on what your parameters are set to. I frequently found I had accidentally tested with max_parallel_workers set to 1, which results in two processes working: the leader process (which does real work when it is not coordinating) and one worker.

    SHOW max_worker_processes;
    SHOW max_parallel_workers;
    SHOW max_parallel_workers_per_gather;

Aggregates

Behaviour for aggregate queries is still good, as seen in PostgreSQL 11 last year.

    SET max_parallel_workers = 8;
    SET max_parallel_workers_per_gather = 4;

    EXPLAIN ANALYZE
      SELECT Sum(ST_Area(geom))
        FROM pd;

Boom! We get a 3-worker parallel plan and execution about 3x faster than the sequential plan.

Scans

The simplest spatial parallel scan adds a spatial function to the target list or filter clause.

    SET max_parallel_workers = 8;
    SET max_parallel_workers_per_gather = 4;

    EXPLAIN ANALYZE
      SELECT ST_Area(geom)
        FROM pd;

Boom! We get a 3-worker parallel plan and execution about 3x faster than the sequential plan. This query did not work out of the box with PostgreSQL 11.

    Gather (cost=1000.00..27361.20 rows=69534 width=8)
      Workers Planned: 3
      ->  Parallel Seq Scan on pd (cost=0.00..19407.80 rows=22430 width=8)

Joins

Starting with a simple join of all the polygons to the 100-points-per-polygon table, we get a parallel plan right out of the box! No amount of begging and pleading would get a parallel plan in PostgreSQL 11.

    SET max_parallel_workers_per_gather = 4;

    EXPLAIN
      SELECT *
        FROM pd
        JOIN pts_100 pts
          ON ST_Intersects(pd.geom, pts.geom);

    Gather (cost=1000.28..837378459.28 rows=5322553884 width=2579)
      Workers Planned: 4
      ->  Nested Loop (cost=0.28..305122070.88 rows=1330638471 width=2579)
          ->  Parallel Seq Scan on pts_100 pts (cost=0.00..75328.50 rows=1738350 width=40)
          ->  Index Scan using pd_geom_idx on pd (cost=0.28..175.41 rows=7 width=2539)
              Index Cond: (geom && pts.geom)
              Filter: st_intersects(geom, pts.geom)

The only quirk in this plan is that the nested loop join is being driven by the pts_100 table, which has 100 times the number of records of the pd table. The plan for a query against the pts_10 table also returns a parallel plan, but with pd as the driving table.

    EXPLAIN
      SELECT *
        FROM pd
        JOIN pts_10 pts
          ON ST_Intersects(pd.geom, pts.geom);

Right out of the box, we still get a parallel plan!

    Gather (cost=1000.28..85251180.90 rows=459202963 width=2579)
      Workers Planned: 3
      ->  Nested Loop (cost=0.29..39329884.60 rows=148129988 width=2579)
          ->  Parallel Seq Scan on pd (cost=0.00..13800.30 rows=22430 width=2539)
          ->  Index Scan using pts_10_gix on pts_10 pts (cost=0.29..1752.13 rows=70 width=40)
              Index Cond: (geom && pd.geom)
              Filter: st_intersects(pd.geom, geom)

source: http://blog.cleverelephant.ca/2019/05/parallel-postgis-4.html
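To actually feel the speedup rather than just read the plans, a quick timing harness can toggle the worker setting and run the aggregate twice. This is my own illustrative sketch, not from the blog post; it assumes the tutorial's pd table is loaded and uses a placeholder connection string.

    import time
    import psycopg2

    # Placeholder DSN; point it at the database holding the tutorial's
    # 'pd' table.
    conn = psycopg2.connect("dbname=parallel_test")
    conn.autocommit = True
    cur = conn.cursor()

    for workers in (0, 4):  # 0 disables parallel plans entirely
        # Safe to interpolate here: 'workers' comes from our own tuple.
        cur.execute(f"SET max_parallel_workers_per_gather = {workers}")
        start = time.perf_counter()
        cur.execute("SELECT Sum(ST_Area(geom)) FROM pd")
        total = cur.fetchone()[0]
        print(f"workers={workers}: {time.perf_counter() - start:.3f}s, sum={total}")

    cur.close()
    conn.close()

On a 4-core machine like the author's, you would expect roughly the 3x gap between the two runs that the post reports.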