
Lurker

Moderators
  • Content Count

    4,083
  • Joined

  • Last visited

  • Days Won

    337

Lurker last won the day on January 17

Lurker had the most liked content!

Community Reputation

2,234 Celebrity

About Lurker

  • Rank
    Associate Professor
  • Birthday 02/13/1983

Profile Information

  • Gender
    Male
  • Location
    INDONESIA
  • Interests
    GIS and Remote Sensing


  1. January 3, 2020 - Recent Landsat 8 Safehold Update

     On December 19, 2019 at approximately 12:23 UTC, Landsat 8 experienced a spacecraft constraint which triggered entry into a Safehold. The Landsat 8 Flight Operations Team recovered the satellite from the event on December 20, 2019 (DOY 354). The spacecraft resumed nominal on-orbit operations and ground station processing on December 22, 2019 (DOY 356).

     Data acquired between December 22, 2019 (DOY 356) and December 31, 2019 (DOY 365) exhibit some increased radiometric striping and minor geometric distortions (see image below), in addition to the normal Operational Land Imager/Thermal Infrared Sensor (OLI/TIRS) alignment offset apparent in Real-Time tier data. Acquisitions after December 31, 2019 (DOY 365) are consistent with pre-Safehold Real-Time tier data and are suitable for remote sensing use where applicable. All acquisitions after December 22, 2019 (DOY 356) will be reprocessed to meet typical Landsat data quality standards after the next TIRS Scene Select Mirror (SSM) calibration event, scheduled for January 11, 2020.

     [Image] Landsat 8 Operational Land Imager acquisition on December 22, 2019 (path 148/row 044), after the spacecraft resumed nominal on-orbit operations and ground station processing. This acquisition demonstrates the increased radiometric striping and minor geometric distortions observed in all data acquired between December 22, 2019 and December 31, 2019. All acquisitions after December 22, 2019 will be reprocessed on January 11, 2020 to achieve typical Landsat data quality standards.

     Data not acquired during the Safehold event are listed below and displayed in purple on the map.

     [Map] Landsat 8 scenes not acquired from Dec 19-22, 2019:
     Path 207 Rows 160-161
     Path 223 Rows 60-178
     Path 6 Rows 22-122
     Path 22 Rows 18-122
     Path 38 Rows 18-122
     Path 54 Rows 18-214
     Path 70 Rows 18-120
     Path 86 Rows 24-110
     Path 102 Rows 19-122
     Path 118 Rows 18-185
     Path 134 Rows 18-133
     Path 150 Rows 18-133
     Path 166 Rows 18-222
     Path 182 Rows 18-131
     Path 198 Rows 18-122
     Path 214 Rows 34-122
     Path 230 Rows 54-179
     Path 13 Rows 18-122
     Path 29 Rows 20-232
     Path 45 Rows 18-133

     After recovering from the Safehold successfully, data acquired on December 20, 2019 (DOY 354) and during most of the day on December 21, 2019 (DOY 355) were ingested into the USGS Landsat Archive and marked as "Engineering". These data are still being assessed to determine whether they will be made available for download through all USGS Landsat data portals.

     source: https://www.usgs.gov/land-resources/nli/landsat/january-3-2020-recent-landsat-8-safehold-update
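     Since the advisory above is purely date-based, affected scenes can be flagged straight from their product identifiers. Below is a minimal sketch, assuming Landsat Collection 1 product IDs (where the fourth underscore-separated field is the acquisition date, YYYYMMDD); the example scene ID is hypothetical.

```python
from datetime import date

# Degraded-quality window from the USGS advisory (DOY 356-365 of 2019).
AFFECTED_START, AFFECTED_END = date(2019, 12, 22), date(2019, 12, 31)

def is_affected(product_id: str) -> bool:
    """Return True for scenes acquired inside the degraded-quality window,
    i.e. scenes pending the January 11, 2020 reprocessing."""
    acq = product_id.split("_")[3]  # acquisition date field, YYYYMMDD
    d = date(int(acq[:4]), int(acq[4:6]), int(acq[6:8]))
    return AFFECTED_START <= d <= AFFECTED_END

# Hypothetical Real-Time tier scene for path 148 / row 044:
print(is_affected("LC08_L1TP_148044_20191222_20191223_01_RT"))  # True
```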
  2. Interesting video on How Tos: WebOpenDroneMap (WebODM) is a friendly Graphical User Interface (GUI) for OpenDroneMap. It enhances the capabilities of OpenDroneMap by providing an easy tool for processing drone imagery, with buttons, process status bars, and a new way to store images. WebODM lets users work by projects, so the user can create different projects and process the related images. As a whole, WebODM on Windows is an implementation of PostgreSQL, Node, Django, OpenDroneMap, and Docker. The installation requires 6 GB of disk space plus Docker. That may seem huge, but it is the only way to process drone imagery on Windows using only open source software. We definitely see huge potential in WebODM for image processing, so we have produced this tutorial on the installation and will post more tutorials on applying WebODM to drone images.

     For this tutorial you need Docker Toolbox installed on your computer. You can follow this tutorial to get Docker on your PC: https://www.hatarilabs.com/ih-en/tutorial-installing-docker

     You can visit the WebODM site on GitHub: https://github.com/OpenDroneMap/WebODM

     Videos: the tutorial was split into three short videos.
     Part 1: https://www.youtube.com/watch?v=AsMSoWAToxE
     Part 2: https://www.youtube.com/watch?v=8GKx3fz0qgE
     Part 3: https://www.youtube.com/watch?v=eCZFzaXyMmA
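     For those who prefer scripting over clicking through the GUI, the processing node behind WebODM can also be driven from Python. A minimal sketch, assuming a NodeODM instance is reachable on localhost:3000 (the default in a WebODM install) and that the pyodm client library is installed; the image file names are placeholders.

```python
# pip install pyodm
from pyodm import Node

# Connect to the NodeODM processing node that WebODM manages.
node = Node("localhost", 3000)

# Submit a set of drone photos; the options broadly mirror the
# processing flags exposed in the WebODM interface.
task = node.create_task(
    ["IMG_0001.JPG", "IMG_0002.JPG", "IMG_0003.JPG"],
    {"dsm": True, "orthophoto-resolution": 4},
)
task.wait_for_completion()

# Download the generated assets (orthophoto, DSM, point cloud, ...).
task.download_assets("./odm_results")
```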
  3. The coastal waters of the United States cover an area dwarfing the nation itself. Yet more than half of that ocean floor is a blank—unmapped by all but low-resolution satellite imagery. Now, the White House has announced a new push to examine these 11.6 million square kilometers of undersea territory. President Donald Trump this week signed a memorandum ordering federal officials to draft a new strategy that would accelerate federal efforts to map and explore these reaches.

     The 19 November declaration comes at a time of growing interest in mapping the world's ocean floors. A consortium of scientists from around the world is working to create a complete, detailed picture of the global seabed by 2030. Nations are probing the ocean floor in search of valuable minerals, oil, and gas. In 2021, the United Nations will launch what it's calling the decade of ocean science.

     The new federal initiative could help coordinate what has been a hodgepodge of mapping by industry, government, and academic researchers, says Vicki Ferrini, a marine geophysicist at Columbia University's Lamont-Doherty Earth Observatory in Palisades, New York. "Having an overarching national coordinated strategy is, I think, going to be a game changer," says Ferrini, who is part of the international Seabed 2030 campaign. That campaign is led by the Tokyo-based Nippon Foundation and the nonprofit General Bathymetric Chart of the Oceans in London.

     The new presidential memo directs the White House's Ocean Policy Committee to, within 6 months, draft a strategy to map U.S. territorial waters, which stretch 320 kilometers from the coast. Today, roughly 40% of that area is charted, according to the National Oceanic and Atmospheric Administration (NOAA). It puts special emphasis on coastal waters around Alaska, where mapping is particularly sparse and pressures including coastal erosion, climate change, and offshore oil exploration are converging.

     Detailed seafloor maps are vital for understanding earth and ocean dynamics, identifying biological hot spots, and guiding exploration for minerals, oil, and gas, scientists say. For instance, Japanese scientists were able to reconstruct the forces that drove the devastating 2011 Tohoku earthquake in part because they had previously mapped the sea floor where the quake happened, says Charlie Paull, a marine geologist at the Monterey Bay Aquarium Research Institute in Moss Landing, California. By contrast, much less is known about the ocean floor off the U.S. Pacific Northwest, where scientists have identified a massive fault that could trigger a magnitude-9 quake. "It will be a woeful disgrace if the event happens and we haven't done the first order of homework," Paull says.

     The Trump administration's ocean policies, first articulated in June 2018, have drawn criticism from conservation groups for emphasizing economic development, particularly offshore oil and gas exploration. But the interest in ocean mapping could give a boost to research, says Amy Trice of the Ocean Conservancy, a Washington, D.C., nonprofit devoted to ocean science and conservation. "The next step is: Are there going to be resources that actually go toward advancing the strategy? I think that's the hope," Trice says. In the past, the administration has sought to cut money for federal programs responsible for ocean mapping. Its 2020 budget request to Congress, for example, proposed a 16% cut to NOAA programs that play a key role.
     (Congress has largely rejected those cuts, although it has not yet completed work on 2020 spending bills.)

     The new initiative has raised concerns that it could weaken regulations on environmental impacts. In particular, the presidential order directs officials to "increase the efficiency" of permitting for ocean exploration and mapping. Conservationists in the Southeast and elsewhere have sued to block methods that use blasts of sound to map the sea floor and the geology below it, arguing that such seismic techniques can harm sea life such as whales. And Sierra Weaver, a lawyer for the Southern Environmental Law Center in Chapel Hill, North Carolina, is leery about talk of making permitting for seafloor mapping more efficient. "We all know with this administration, when they say streamlining, what this really means is rollback," she says.

     That's not the intention, says a White House official with the Office of Science and Technology Policy who is involved in ocean policy. The goal is to reduce red tape for scientific expeditions by federal scientists or researchers working with federal funding. "This is not about opening up a new avenue for expedited permitting" for oil and gas exploration, said the official, who declined to be named.

     U.S. scientists are already pressing ahead. Federal scientists have spent the past 3 years creating better maps off the Pacific coast, in an effort to find deep ocean coral habitat, highlight faults that might trigger tsunamis, and examine spots where offshore wind turbines might be placed. Meanwhile, new technology promises to make ocean mapping faster and cheaper, Ferrini says. Desk-size drones or autonomous kayaks can cruise shallow ocean areas with special sonar. Torpedo-shaped vessels can plunge into the ocean deeps. Earlier this year, the XPRIZE Foundation awarded $7 million in a competition for autonomous tools to explore the deep ocean. A mapping project won. "In the world of ocean mapping," Ferrini says, "we're kind of at the brink of a big shift."

     source: https://www.sciencemag.org/news/2019/11/trump-plan-push-seafloor-mapping-wins-warm-reception
  4. Two Chinese Beidou navigation satellites successfully launched Monday on top of a Long March 3B rocket, completing the core of China's independent positioning and timing network ahead of the start of global service next year. The 184-foot-tall (56-meter) Long March 3B rocket lifted off from the Xichang space base in southwestern China's Sichuan province at 0722 GMT (2:22 a.m. EST; 3:22 p.m. Beijing time) Monday, according to statements issued by the country's top state-owned aerospace contractor.

     Four liquid-fueled boosters and a core stage — all fed by toxic hydrazine fuel — powered the Long March 3B away from a launch pad surrounded by hills painted with the colors of late autumn foliage. The rocket arced toward the southeast into a clear afternoon sky and shed its four boosters around two-and-a-half minutes into the flight. The core stage shut down and fell away moments later, giving way to the Long March 3B's second stage. A twin-engine third stage, propelled by hydrogen-fueled engines, ignited to continue the trip into orbit before deploying a Yuanzheng upper stage, which finished the job of placing the two Beidou navigation satellites into their targeted circular orbit more than three hours later.

     The Beidou satellites launched Monday are orbiting Earth at an average altitude of 13,500 miles (21,800 kilometers), with an inclination of 55 degrees, according to tracking data published by the U.S. military. The successful launch means all 24 third-generation, or BDS-3, Medium Earth Orbit satellites for China's Beidou navigation network have been sent into space since 2017, according to the Chinese state-run Xinhua news agency. The BDS-3 spacecraft are the latest generation of China's Beidou navigation satellites intended for worldwide service, following earlier missions designed for technology demonstrations or intermediate regional service. "BDS now has the full capacity for global service. It will be able to provide excellent navigation service to global users," said Yang Changfeng, chief designer of the Beidou satellite navigation system, or BDS, according to Xinhua.

     The global Beidou system includes 24 satellites spread among three orbital planes in Medium Earth Orbit — like the spacecraft launched Monday — and six satellites in higher geosynchronous orbits more than 22,000 miles (nearly 36,000 kilometers) above Earth. Three of those are in inclined geosynchronous orbits, and three are kept stationary over the equator. The Beidou network is analogous to the U.S. military's Global Positioning System and Russia's Glonass navigation fleet. Europe is also building out a constellation of navigation satellites to provide global service.

     China has launched 53 Beidou satellites since 2000, including prototypes and older-generation spacecraft no longer in operation. Monday's Long March 3B flight marked the 32nd orbital launch attempt of the year from China, and the 30th mission to successfully reach orbit in 2019. China has launched more orbital missions than any other country this year.

     source: https://spaceflightnow.com/2019/12/16/china-completes-core-of-beidou-global-satellite-navigation-system/
  5. While many advancements have been made this last decade in the automated classification of above-surface features using remote sensing data, progress in detecting underground features has lagged. Technologies for detecting such features, including ground penetrating radar, electrical resistivity, and magnetometry, exist, but methods for feature extraction and identification mostly depend on the experience of the instrument user. One problem has been creating approaches that can deal with complex signals. Ground penetrating radar (GPR), for instance, often produces ambiguous signals that can carry substantial noise interference relative to the feature one wants to identify.

     One approach has been to fit approximation polynomials to given signals, with the derived coefficients then serving as inputs to a neural network model. This technique can help reduce noise and differentiate signals that follow clear patterns from larger background signals. Differentiating signals on the basis of a small set of fitted coefficients is one way to simplify and better separate data signals.[1] Another approach is to use a multilayer perceptron with a nonlinear activation function that transforms the data. This is effectively a similar technique but uses different transform functions than other neural network models. Applications of this approach include differentiating the thickness of underground structures from surrounding sediments and soil.[2]

     Other methods have been developed to determine the best locations to place sources and receivers so that relevant data can be captured. In seismic research, convolutional neural networks (CNNs) have been applied to determine better positioning of sensors so that better data quality can be achieved, yielding very high precision and recall rates of over 0.99. Using a series of filtered layers, signals can be assessed for their data quality against that of manually placed instruments. The quality of a placement can also be compared with other locations to see if the overall signal capture improves. Thus, rather than focusing mainly on signal processing, this method also focuses on signal placement and capture, comparing candidate placements to optimize data capture locations.[3]

     One problem in geophysical data is inversion, where data points are interpreted as the opposite of what they are because a reflective signal may hide the nature of the true data. Techniques using CNNs have also been developed whereby the patterning of data signals around a given inversion can be filtered and assessed using activation functions. Multiple layers that transform and reduce data to specific signals help to identify where patterns of data suggest an inversion is likely, while checking whether this follows patterns from other data using Bayesian learning techniques.[4]

     source: https://www.gislounge.com/automated-remote-sensing-of-underground-features/
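     To make the polynomial-coefficient approach of [1] and [2] concrete, here is a minimal sketch; it is not the cited authors' code: the traces are synthetic stand-ins for real GPR signals, and the polynomial degree and network size are arbitrary assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def trace_features(trace, degree=6):
    """Fit a low-order polynomial to a trace and return its coefficients
    as a compact, noise-suppressed feature vector."""
    t = np.linspace(-1.0, 1.0, trace.size)
    return np.polynomial.polynomial.polyfit(t, trace, degree)

def make_trace(has_target):
    """Synthetic stand-in: 'target' traces carry a reflection-like pulse,
    'background' traces are noise only. Real GPR traces would be read
    from instrument output instead."""
    t = np.linspace(-1.0, 1.0, 256)
    sig = np.exp(-((t - 0.2) ** 2) / 0.01) if has_target else np.zeros_like(t)
    return sig + 0.3 * rng.standard_normal(t.size)

X = np.array([trace_features(make_trace(i % 2 == 0)) for i in range(200)])
y = np.array([1 if i % 2 == 0 else 0 for i in range(200)])

# Small MLP with a nonlinear (tanh) activation, as in the second approach.
clf = MLPClassifier(hidden_layer_sizes=(16,), activation="tanh",
                    max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```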
  6. Radiant Earth has launched Radiant MLHub, a cloud-based open library of geospatial training data for machine learning algorithms. In launching the repository, Radiant Earth noted that while there is an abundance of satellite imagery, there is a lack of training data and tools for training machine learning algorithms. Radiant MLHub is a federated site for the discovery of, and access to, high-quality Earth observation (EO) training datasets and machine learning models. Individuals and organizations can contribute by sharing their own training data and models with Radiant MLHub. The data and models available on Radiant MLHub are distributed under a Creative Commons license (CC BY 4.0).

     The site debuted with "crop type" training data for major crops in Kenya, Tanzania, and Uganda supplied by the Radiant Earth Foundation. Future planned datasets include Global Land Cover and Surface Water, as well as additions from the site's partners. All of the datasets are stored using a SpatioTemporal Asset Catalog (STAC) compliant catalog. Per Radiant Earth: "Training datasets include pairs of imagery and labels for different types of ML problems including image classification, object detection, and semantic segmentation." Users interested in accessing the site's data and models can get started by downloading this how-to-guide. link:
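     Because the catalog is STAC-compliant, browsing it should come down to plain HTTP requests. The sketch below is speculative: the API root URL and the API-key query parameter are assumptions, so check the how-to guide for the actual endpoint and authentication scheme.

```python
import requests

# Assumed STAC API root and auth scheme -- verify against the guide.
API = "https://api.radiant.earth/mlhub/v1"

resp = requests.get(f"{API}/collections", params={"key": "YOUR_API_KEY"})
resp.raise_for_status()

# A STAC /collections response lists the available training datasets.
for c in resp.json().get("collections", []):
    print(c["id"], "-", c.get("license", "unknown license"))
```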
  7. Interesting article: North-South displacement field - 1999 Hector Mine earthquake, California

     In complement to seismological records, knowledge of the ruptured fault geometry and of the co-seismic ground displacements are key data for investigating the mechanics of seismic rupture. This information can be retrieved from sub-pixel correlation of optical images. We are investigating the use of SPOT (Satellite pour l'Observation de la Terre) satellite images. The technique developed here is attractive due to the operational status of a number of optical imaging programs and the availability of archived data. However, uncertainties on the imaging system itself and on its attitude dramatically limit its potential. We overcome these limitations by applying an iterative corrective process that allows precise image registration and takes advantage of the availability of accurate Digital Elevation Models with global coverage (SRTM). This technique is thus a valuable complement to SAR interferometry, which provides accurate measurements kilometers away from the fault but generally fails in the near-fault zone, where the fringes get noisy and saturated. A comparison between the two methods is briefly discussed, with application to the 1992 Landers earthquake in California (Mw 7.3). Applications of this newly developed technique are presented: the horizontal co-seismic displacement fields induced by the 1999 Hector Mine earthquake in California (Mw 7.1) and by the 1999 Chi-Chi earthquake in Taiwan (Mw 7.6) have recently been retrieved using archive images. The data obtained can be downloaded (see further down).

     Latest Study Cases

     Sub-pixel correlation of optical images

     [Flow chart of the technique that has been developed; it allows for precise orthorectification and coregistration of the SPOT images.] More details about the optimization process are given in the next sections.

     Understanding the disparities measured from optical images

     Differences in geometry between the two images to be registered:
     - Uncertainties on attitude parameters (roll, pitch, yaw)
     - Inaccuracy of orbital parameters (position, velocity)
     - Incidence angle differences + topography uncertainties (parallax effect)
     - Optical and electronic biases (optical aberrations, CCD misalignment, focal length, sampling period, etc.)
     » May account for disparities up to 800 m on SPOT 1/2/3/4 images; 50 m for SPOT 5 (see [3]).

     Ground deformations:
     - Earthquakes, landslides, etc.
     » Typically subpixel scale: ranging from 0 to 10 meters.

     Temporal decorrelation:
     - Changes in vegetation, rivers, changes in urban areas, etc.
     » Correlation is lost: adds noise to the measurement - up to 1 m.

     » Ground deformations are largely dominated by the geometrical artifacts.

     Precise registration: geometrical corrections

     SPOT satellites are pushbroom imaging systems ([1],[2]): all optical parts remain fixed during acquisition, and the scanning is accomplished by the forward motion of the spacecraft. Each line in the image is thus acquired at a different time and subject to the variations of the platform. The orthorectification process consists in modeling and correcting these variations to produce cartographic, distortion-free images. It is then possible to accurately register images and look for their disparities using correlation techniques. Attitude variations (roll, pitch, and yaw) during the scanning process have to be integrated in the image model (see [1],[2]).
     Errors in correcting the satellite look directions will result in projecting the image pixels at the wrong location on the ground: important parallax artifacts will be seen when measuring displacement between two images. Exact pixel projection on the ground is achieved through an optimization algorithm that iteratively corrects the look directions by selecting ground control points. An accurate topography model has to be used.

     What parameters to optimize?
     - Initial attitude values of the platform (roll, pitch, yaw),
     - Constant drift of the attitude values along the image acquisition,
     - Focal length (different value depending on the instrument, HRG1 vs HRG2),
     - Position and velocity.

     How to optimize: an iterative algorithm using a set of GCPs (Ground Control Points). GCPs are generated automatically with subpixel accuracy: they result from a correlation between an orthorectified reference frame and the rectified image whose parameters are to be optimized.

     A two-stage procedure:
     - One of the images is optimized with respect to the shaded DEM (GCPs are generated from the correlation with the shaded DEM). The DEM is then considered as the ground truth. No GPS points are needed.
     - The other image is then optimized using another set of GCPs resulting from the correlation with the first image (co-registration).

     Measuring co-seismic deformation with InSAR, a comparison

     [SAR interferogram (ERS) showing the near-vertical component of the ground displacement induced by the 1992 Landers earthquake; a fringe represents a near-vertical displacement of 2.8 cm (Massonnet et al., 1993).] There are no organized fringes in a band within 5-10 km of the fault trace: the displacement is sufficiently large that the change in range across a radar pixel exceeds one fringe per pixel, and coherence is lost. http://earth.esa.int/applications/data_util/ndis/equake/land2.htm
     » SAR interferometry is not a suitable technique to measure near-fault displacements.

     The 1992 Landers earthquake revisited: profiles in offsets and elastic modeling show good agreement (from [6]).
     » Fault ruptures can be imaged with this technique. For other applications of the technique, see [4], [5].

     Applying the precise rectification algorithm + subpixel correlation: the 1999 Hector Mine earthquake (Mw 7.1, California)

     Obtaining the data (available in ENVI file format; load bands as grayscale images; bands are: N/S offsets, E/W offsets, SNR). Raw and filtered results: HectorMine.zip
     - Pre-earthquake image: SPOT 4, acquisition date 08-17-1998, ground resolution 10 m
     - Post-earthquake image: SPOT 2, acquisition date 08-18-2000, ground resolution 10 m
     - Offsets measured from correlation correspond to sub-pixel offsets in the raw images.
     - Correlation windows: 32 x 32 pixels; 96 m between two measurements.

     So far we have:
     - A precise mapping of the rupture zone: the offset fields have a resolution of 96 m,
     - Measurements with subpixel accuracy (displacements of at most 10 meters),
     - Improved global georeferencing of the images with no GPS measurements,
     - Improved processing time, since the GCP selection is automatic,
     - Suppressed the main attitude artifacts.
     The profiles do not show any long-wavelength deformations (see Dominguez et al. 2003).

     We notice:
     - Linear artifacts in the along-track direction due to CCD misalignments [schematic of a DIVOLI showing four CCD linear arrays],
     - Some topographic artifacts: the image resolution is higher than that of the DEM,
     - Some decorrelation due to rivers and clouds,
     - High-frequency noise due to the noise sensitivity of the Fourier correlator (see Van Puymbroeck et al.).

     Conclusion

     The subpixel correlation technique has been improved to overcome most of its limitations:
     » Precise rectification and co-registration of the images,
     » No more topographic effects (depending on the DEM resolution),
     » No need for GPS points - an independent and automatic algorithm,
     » Better spatial resolution (see Van Puymbroeck et al.).
     To be improved:
     » Stripes due to the CCDs' misalignment,
     » High-frequency noise from the correlator,
     » Processing of images with corrupted telemetry.
     » The subpixel correlation technique appears to be a valuable complement to SAR interferometry for ground deformation measurements.

     References:
     [1] SPOT 5 geometry handbook: ftp://ftp.spot.com/outgoing/SPOT_docs/geometry_handbook/S-NT-73-12-SI.pdf
     [2] SPOT User's Handbook Volume 1 - Reference Manual: ftp://ftp.spot.com/outgoing/SPOT_docs/SPOT_User's Handbook/SUHV1RM.PDF
     [3] SPOT 5 Technical Summary: ftp://ftp.spot.com/outgoing/SPOT_docs/technical/spot5_tech_slides.ppt
     [4] Dominguez, S., J.P. Avouac, and R. Michel, Horizontal co-seismic deformation of the 1999 Chi-Chi earthquake measured from SPOT satellite images: implications for the seismic cycle along the western foothills of Central Taiwan, J. Geophys. Res., 107, 10.1029/2001JB00482, 2003.
     [5] Michel, R. and J.P. Avouac, Deformation due to the 17 August 1999 Izmit earthquake measured from SPOT images, J. Geophys. Res., 107, 10.1029/2000JB000102, 2002.
     [6] Van Puymbroeck, N., R. Michel, R. Binet, J.P. Avouac, and J. Taboury, Measuring earthquakes from optical satellite images, Applied Optics, 39(20), 3486-3494, 2000.

     Publications:
     Leprince, S., S. Barbot, F. Ayoub, and J.P. Avouac, Automatic, precise ortho-rectification and co-registration for satellite image correlation, application to seismotectonics. To be submitted.

     Conferences:
     - F. Levy, Y. Hsu, M. Simons, S. Leprince, J. Avouac. Distribution of coseismic slip for the 1999 Chi-Chi, Taiwan earthquake: new data and implications of varying 3D fault geometry. AGU 2005 Fall Meeting, San Francisco.
     - M. Taylor, S. Leprince, J. Avouac. A study of the 2002 Denali co-seismic displacement using SPOT horizontal offsets, field measurements, and aerial photographs. AGU 2005 Fall Meeting, San Francisco.
     - Y. Kuo, F. Ayoub, J. Avouac, S. Leprince, Y. Chen, J.H. Shyu, Y. Kuo. Co-seismic horizontal ground slips of the 1999 Chi-Chi earthquake (Mw 7.6) deduced from image comparison of satellite SPOT and aerial photos. AGU 2005 Fall Meeting, San Francisco.

     source: http://www.tectonics.caltech.edu/geq/spot_coseis/
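     The core measurement in this work is estimating sub-pixel offsets between two co-registered image patches. The original pipeline uses a Fourier correlator; the sketch below is a generic stand-in, not the authors' code: plain phase correlation with parabolic refinement of the correlation peak, assuming the patches are already orthorectified and co-registered and the shift is small enough that the peak lies away from the window border.

```python
import numpy as np

def _parabolic(cm, c0, cp):
    # Vertex of the parabola through three neighboring correlation samples.
    denom = cm - 2.0 * c0 + cp
    return 0.0 if denom == 0 else 0.5 * (cm - cp) / denom

def subpixel_offset(ref, moved):
    """Estimate the (row, col) shift mapping `ref` onto `moved` by phase
    correlation, refined to sub-pixel precision around the integer peak."""
    F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moved)
    F /= np.abs(F) + 1e-12                       # keep only the phase
    corr = np.fft.fftshift(np.abs(np.fft.ifft2(F)))
    r, c = np.unravel_index(np.argmax(corr), corr.shape)
    dr = _parabolic(corr[r - 1, c], corr[r, c], corr[r + 1, c])
    dc = _parabolic(corr[r, c - 1], corr[r, c], corr[r, c + 1])
    return r + dr - ref.shape[0] // 2, c + dc - ref.shape[1] // 2

# Quick check on a synthetic window:
rng = np.random.default_rng(0)
patch = rng.standard_normal((64, 64))
shifted = np.roll(patch, (3, -5), axis=(0, 1))   # integer shift for the demo
print(subpixel_offset(patch, shifted))           # ~ (3.0, -5.0)
```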
  8. Lurker

    moveVis

    Please elaborate: do you plan on using remote sensing data to make an animation? Please add the details. A simple example of the package's functions can be seen here: http://animove.org/wp-content/uploads/2019/04/Daniel_Palacios_animate_moveVis.html
  9. Google says it has built a computer that is capable of solving problems that classical computers practically cannot. According to a report published in the scientific journal Nature, Google's processor, Sycamore, performed a truly random-number generation task in 200 seconds. The same task would take about 10,000 years for a state-of-the-art supercomputer to execute. The achievement marks a major breakthrough in the technology world's decades-long quest to use quantum mechanics to solve computational problems. Google CEO Sundar Pichai wrote that the company started exploring the possibility of quantum computing in 2006.

     In classical computers, bits can store information as either a 0 or a 1 in binary notation. Quantum computers use quantum bits, or qubits, which can be both 0 and 1. According to Google, the Sycamore processor uses 53 qubits, which allows for a drastic increase in speed compared with classical computers. The report acknowledges that the processor's practical applications are limited. Google says Sycamore can generate truly random numbers without utilizing the pseudo-random formulas that classical computers use.

     Pichai called the success of Sycamore the "hello world" moment of quantum computing. "With this breakthrough we're now one step closer to applying quantum computing to—for example—design more efficient batteries, create fertilizer using less energy, and figure out what molecules might make effective medicines," Pichai wrote.

     IBM has pushed back, saying Google hasn't achieved supremacy because "ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity." On its blog, IBM further discusses its objections to the term "quantum supremacy," arguing that it is widely misinterpreted. "First because, as we argue above, by its strictest definition the goal has not been met," IBM's blog says. "But more fundamentally, because quantum computers will never reign 'supreme' over classical computers, but will rather work in concert with them, since each have their unique strengths."

     News of Google's breakthrough has raised concerns among some people, such as presidential hopeful Andrew Yang, who believe quantum computing will render password encryption useless. Theoretical computer science professor Scott Aaronson refuted these claims on his blog, writing that the technology needed to break cryptosystems does not exist yet. The concept of quantum computers holding an advantage over classical computers dates back to the early 1980s. In 2012, John Preskill, a professor of theoretical physics at Caltech, coined the term "quantum supremacy."

     source: https://www.npr.org/2019/10/23/772710977/google-claims-to-achieve-quantum-supremacy-ibm-pushes-back
  10. No need to download it, in my opinion. Better to process the rover data directly on BIG's website, here: http://inacors.big.go.id/SBC/spider-business-center Just register, then upload your rover data and process it right there; the processed result comes out immediately.
  11. Just found this interesting article on the Agisoft forum: source: https://www.agisoft.com/forum/index.php?topic=7851.0
  12. I have a bunch of Yahoo Groups, some related to GIS, Remote Sensing, astronomy, etc. Yahoo announced that all Yahoo Groups will be shut down on Monday, October 21st, and all Groups content will be removed on December 14th. After October 21st, users will no longer be able to upload new content to groups, but existing content will remain on the network. On December 14th, the following types of content will be removed from Yahoo Groups:
      - Files
      - Polls
      - Links
      - Photos
      - Folders
      - Calendar
      - Database
      - Attachments
      - Conversations
      - Email Updates
      - Message Digest
      - Message History
      Going forward, Yahoo Groups will become harder to join, as any currently public group will be restricted or private. Groups can still be found in a search, but users will have to submit a request to join them. To save content from a group before it is removed, simply sign in to your Yahoo account and download the files directly from your group's page. First you'll have to request the data, then Yahoo will send an email when it's ready for download.

      source: https://www.searchenginejournal.com/yahoo-to-shut-down-all-yahoo-groups-on-october-21st/330695/#close