
Where do I find the legend for netCDF files in QGIS?

I'm using QGIS to make maps of the maize, rice and wheat cultivated area at global level. I found these great netCDF files: http://www.geog.mcgill.ca/landuse/pub/Data/175crops2000/NetCDF/

Here's an example of the maps that they can generate: Maps

I have loaded the maize, rice and wheat files as layers in QGIS, and get a rainbow of colors. Problem is, I have no idea what they mean. According to the legend on the map above, the color scale indicates "% of total area"; the same datasets were used - I think - for this map, where the legend indicates "Cassava area (Percent cropland)".

Both of those legends are a bit cryptic (and I get around 40 distinct colors in QGIS). So I'm trying to locate where in QGIS I can find the "official" legend and descriptions of what the colors indicate. I followed the steps here, and the legend I got was just the names of the three layers, with no colors. If I try just one layer, I get the name of the layer and nothing else.


QGIS sees four bands inside the netCDF file and tries to create an RGBA (red-green-blue-alpha) image out of them, which might make little sense for this kind of data.

You get a better result if you open the layer's properties, go to the Style tab, and change the render type from multiband colour to singleband pseudocolour. Then you can classify the data, and you will get a legend in the table of layers too.

You can select the band you want displayed, but there is not much information about what the four bands contain. You would have to ask the authors of the data about that.
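
Classifying raster values for a singleband pseudocolour legend amounts to binning cell values into a fixed number of classes, each mapped to one colour. A minimal sketch of the idea in plain Python (not the QGIS API), assuming an equal-interval scheme over percent-of-area values:

```python
def equal_interval_breaks(vmin, vmax, n_classes):
    """Upper bound of each of n_classes equal-width bins spanning [vmin, vmax]."""
    width = (vmax - vmin) / n_classes
    return [vmin + width * i for i in range(1, n_classes + 1)]

def classify(value, breaks):
    """Index of the first class whose upper break contains value."""
    for i, upper in enumerate(breaks):
        if value <= upper:
            return i
    return len(breaks) - 1  # clamp values above the last break

# Five classes over 0-100 "% of total area", as in the map legends above
breaks = equal_interval_breaks(0.0, 100.0, 5)  # [20.0, 40.0, 60.0, 80.0, 100.0]
print(classify(37.5, breaks))                  # class 1, i.e. the 20-40% bin
```

QGIS performs this binning for you when you pick a classification mode in the singleband pseudocolour renderer; the legend entries it generates are exactly these breaks.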


Chapter 4 Symbolizing features

Each color is a combination of three perceptual dimensions: hue, lightness and saturation.

4.1.1 Hue

Hue is the perceptual dimension associated with color names. Typically, we use different hues to represent different categories of data.

Figure 4.1: An example of eight different hues. Hues are associated with color names such as green, red or blue.

Note that magentas and purples are not part of the natural visible light spectrum; instead, they are a mix of reds and blues (or violets) from the spectrum's tail ends.

4.1.2 Lightness

Lightness (sometimes referred to as value) describes how much light reflects (or is emitted) off of a surface. Lightness is an important dimension for representing ordinal/interval/ratio data.

Figure 4.2: Eight different hues (across columns) with decreasing lightness values (across rows).

4.1.3 Saturation

Saturation (sometimes referred to as chroma) is a measure of a color's vividness. You can use saturated colors to help distinguish map symbols, but be careful when manipulating saturation: it should be modified sparingly in most maps.

Figure 4.3: Eight different hues (across columns) with decreasing saturation values (across rows).


Combine multiple NetCDF files into timeseries multidimensional array python

I am using data from multiple netCDF files (in a folder on my computer). Each file holds data for the entire USA for a period of 5 years, and locations are referenced by the index of an x and y coordinate. I am trying to create a time series for multiple locations (grid cells), compiling the 5-year periods into a 20-year period (this would mean combining 4 files). Right now I am able to extract the data from all files for one location and compile it into an array using numpy append. However, I would like to extract the data for multiple locations, placing it into a matrix where the rows are the locations and the columns contain the time series of precipitation data. I think I have to create a list or dictionary, but I am not really sure how to allocate the data to the list/dictionary within a loop.

I am new to python and netCDF, so forgive me if this is an easy solution. I have been using this code as a guide, but haven't figured out how to format it for what I'd like to do: Python Reading Multiple NetCDF Rainfall files of variable size

I put 3 files on Dropbox so you could access them, but I am only allowed to post 2 links. Here they are:
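
A minimal numpy sketch of the compilation step the question describes, using random arrays as stand-ins for the files' precipitation variable (the shapes and variable layout are assumptions; with the netCDF4 package each array would come from something like `Dataset(path).variables['prcp'][:]`):

```python
import numpy as np

# Stand-ins for the precipitation variable of four 5-year files, each with
# shape (time, y, x): here 5 time steps on a small 3x4 grid of cells.
files = [np.random.default_rng(seed).random((5, 3, 4)) for seed in range(4)]

# 1. Concatenate along the time axis: 4 files x 5 years -> 20 time steps.
full = np.concatenate(files, axis=0)  # shape (20, 3, 4)

# 2. The grid cells of interest, as (y, x) index pairs.
locations = [(0, 0), (1, 2), (2, 3)]
ys, xs = zip(*locations)

# 3. Rows = locations, columns = the 20-step time series.
matrix = full[:, ys, xs].T  # shape (3, 20)
print(matrix.shape)
```

No intermediate lists or dictionaries are needed: fancy indexing with the `ys` and `xs` tuples pulls all locations at once, and the transpose puts locations on the rows.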



Gridded or raster data is a spatial data format that consists of a matrix of cells, or pixels, organised into rows and columns where each cell contains a value for each grid point across a two dimensional surface. Gridded data covers an area, whereas station data you may obtain for a particular Bureau station applies only to a single geographic location.

The datasets are available in ARC ASCII format and can be ported into a GIS or similar spatial data visualisation tool. They are not suitable for use in Microsoft Excel or other spreadsheet programs. You can download a sample file containing gridded daily rainfall totals for Australia.

What is the ARC ASCII gridded data format?

The ARC ASCII gridded data format is a non-proprietary ASCII format that can be directly used in ESRI GIS software packages. The format consists of six lines of header information followed by the actual gridded data values. The header information shows the dimensions of the data, the geographic domain, cell resolution and the code for the 'nodata' value. Data are written row-wise, so that the first data record in a block contains values for the northernmost grid-cells moving from west to east. The last data record in a block contains values for the southernmost grid-cells moving from west to east.
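
The layout described above (six `keyword value` header lines followed by row-wise data values) can be read with a few lines of plain Python. A sketch against a made-up sample grid, assuming the standard ESRI header keywords:

```python
sample = """ncols 4
nrows 2
xllcorner 112.0
yllcorner -44.0
cellsize 0.05
NODATA_value -9999
1.0 2.0 -9999 4.0
5.0 6.0 7.0 8.0
"""

lines = sample.strip().splitlines()
# Six header lines: dimensions, geographic domain, resolution, nodata code
header = {key.lower(): float(val) for key, val in (ln.split() for ln in lines[:6])}
nodata = header["nodata_value"]

# Row-wise data: the first row is the northernmost, values run west to east.
grid = [[float(v) for v in ln.split()] for ln in lines[6:]]
print(int(header["nrows"]), int(header["ncols"]))  # 2 4
print(grid[0][2] == nodata)                        # True: a masked cell
```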

How are data grids created?

Meteorological data is taken from the national climate database of the Bureau of Meteorology – the Australian Data Archive for Meteorology (ADAM). These data come from observations made by ground stations, upper-air soundings, satellites, ships and buoys. Only quality-controlled data are used.

A computer analysis technique then applies a weighted averaging process to the data which generates grid points across Australia. This grid-point analysis technique provides an objective average for each square of the grid and enables useful estimates in data-sparse areas such as central Australia.

Each grid-point represents a square area with sides which range from about 5 kilometres (0.05 degrees) on many of the rainfall and temperature products to 200 kilometres (2 degrees) on the tropical cyclone product. The size of the grids is limited by the density of data across Australia.

Maps are created from gridded data by analysing the grid and assigning contour lines in mapping software packages.

Each dataset comes with complete metadata which outlines the important characteristics of each product.

What is a shapefile?

Shapefiles are a common spatial data format for representing geographical feature data such as State boundaries, streets, climate zones, locations, topography etc. A shapefile consists of several files: a main file (*.shp), an index file (*.shx), a dBASE table (*.dbf) and an optional projection file (*.prj). The three files (*.shp, *.shx and *.dbf) are essential and allow you to view or use shapefiles.

What is a coordinate system (datum), and how is it used for climate data and maps?

A datum is a mathematical surface on which a mapping and coordinate system is based. Australia adopted the Geocentric Datum of Australia (GDA94) in January 2000.

Coordinate systems provide common reference information (coordinates) to uniquely determine the position of a particular place or area on the surface of the earth. There are two common types of coordinate systems: geographic coordinate systems (GCS) and projected coordinate systems (PCS).

A geographic coordinate system (GCS) is a coordinate system that enables a particular location to be identified by a set of latitude and longitude degree units. A projected coordinate system is used to project maps of the earth's spherical surface onto a two-dimensional Cartesian Coordinate Plane i.e. to create flat maps of a curved surface.

Climate gridded datasets are created using a geographic coordinate system (GCS) on GDA 94 datum. Datasets using GCS coordinates can be used within different projection systems. The Bureau's climate maps are created using the Lambert Conformal Conic projection. This projection portrays the physical shape of Australia more accurately than other projection systems.

The Network Common Data Form (netCDF) was developed by UNIDATA and is an open-standard, self-describing, machine-independent data format that supports the creation, access, and sharing of array-oriented scientific datasets. It is commonly used in climatology, meteorology, oceanography and GIS applications. A netCDF file contains all of the metadata needed to extract and understand the data in the file.
Detailed information and tools for using and viewing netCDF files can be found at:
https://www.unidata.ucar.edu/software/netcdf/software.html

What are metadata?

Geospatial metadata are contained in a summary document providing content, quality, type, creation, and spatial information about a dataset. They can be stored in any format, such as a text file, Extensible Markup Language (XML), or database record. Metadata make spatial information more useful to all types of users by making it easier to document and locate datasets and by telling users how to interpret and use the data.


Data Resources: FAQ on Reading PSL netCDF files

At your site, your computer managers maintain two lists of domain names: the forward names and the reverse names. Often, only the forward list is kept up to date. The forward name is the one used when you say you want to connect to a machine by its name; it translates the name to an IP address. But there is also a table that keeps track of the reverse names, the ones that answer the question of what name belongs to a given IP address.

Hopefully your system manager will understand from this message what is being asked and can correct the problem with a couple of minutes of work. Another solution might be to try your FTP from a different machine.

How can I find out more information regarding skin temperature?

Over land and sea ice, the skin temperature is a prognostic variable. Over open water, the skin temperature is fixed at its initial value (from the Reynolds SST data). The Reynolds SST analyses were done weekly and the reconstructed SST monthly. The analyses were linearly interpolated to daily values, which were used for all four analysis times (i.e. 0, 6, 12 and 18Z have the same SST values).
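
The weekly-to-daily interpolation mentioned above is plain linear interpolation between the bracketing weekly analyses. A sketch with made-up SST values (days counted from the first weekly analysis):

```python
def interp_daily(day, week_days, week_vals):
    """Linearly interpolate a weekly series to a single day."""
    pairs = list(zip(week_days, week_vals))
    for (d0, v0), (d1, v1) in zip(pairs, pairs[1:]):
        if d0 <= day <= d1:
            return v0 + (v1 - v0) * (day - d0) / (d1 - d0)
    raise ValueError("day outside the analysed interval")

# Weekly analyses at days 0, 7, 14; day 3 sits 3/7 of the way along the first leg.
print(interp_daily(3, [0, 7, 14], [290.0, 291.4, 290.7]))
```

The same daily value is then reused for all four synoptic times (0, 6, 12 and 18Z), as the FAQ notes.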

The files that contain skin temperature 4 times daily are of the form:

The files that contain skin temperature daily averages are of the form:

The file that contains the monthly skin temperature values is:

Gridded Air Temperatures from NCEP/NCAR Reanalysis at PSL

pressure/air.YY.nc: Temperature (K) at 17 levels. Class A. (Page 463 Column 1)

surface/air.sig995.YY.nc: Temperature at the lowest sigma level (K). Class B. (Page 463 Column 1)

surface_gauss/air.2m.gauss.YY.nc: Temperature at 2 meters (K). Class B. (Page 464 Column 1)

surface_gauss/skt.sfc.gauss.YY.nc: Temperature at surface (skin temperature) K. Class B. (Page 464 Column 1)

tropopause/air.tropp.YY.nc: Temperature at the tropopause (K). Class A. (Page 463 Column 1)

Class A indicates that the analysis variable is strongly influenced by observed data and, hence, it is the most reliable class.

Class B indicates that, although there are observational data that directly affect the value of the variable, the model also has a very strong influence on the analysis value.

  1. First, go to the general search page to find the data you want to retrieve.
  2. Highlight (click on) the dataset(s) and/or variable(s) that are of interest.

Data Set: NCEP Variable: Sea Level Pressure

NOTE: If you make a plot it will be the AVERAGE of the time steps (and other ranges like the levels in the z direction), but the file contains the individual time steps.

How is the tropopause in the reanalysis computed? How is the tropopause identified? The tropopause in the Reanalysis is computed from temperature analysis on model sigma level fields. This is to avoid unnecessary interpolation from sigma to pressure. The definition of tropopause in NCEP's post-processor is as follows:

THE TROPOPAUSE IS IDENTIFIED BY THE LOWEST LEVEL ABOVE 450 MB WHERE THE TEMPERATURE LAPSE RATE -DT/DZ BECOMES LESS THAN 2 K/KM. THE TROPOPAUSE IS NOT ALLOWED HIGHER THAN 85 MB. INTERPOLATIONS OF VARIABLES TO TROPOPAUSE ARE DONE LINEARLY IN LOG OF PRESSURE.

No one has looked carefully at the product, so you may have to do some verification against radiosonde observations before using it. It may be a little noisy; therefore, some spatial filtering may be necessary.

What does the following statement mean: 'The data in the netCDF files are packed'?

Most of the data in our netCDF files are packed. That is to say, they have been transformed by a scale factor and an add offset to reduce the storage needed to two bytes per value. When you extract the short integers, you must unpack the data to recover the correct floating-point values. Data files that contain packed data will have a non-zero add offset and/or a scale factor not equal to 1.
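
The unpacking rule is `unpacked = packed * scale_factor + add_offset`, applied element-wise. A sketch with made-up attribute values (with the netCDF4 package you would read them from the variable's `scale_factor` and `add_offset` attributes, or simply let the library apply them automatically):

```python
def unpack(short_values, scale_factor, add_offset):
    """Recover floating-point values from packed short integers."""
    return [v * scale_factor + add_offset for v in short_values]

# Hypothetical packing of air temperature in kelvin into 2-byte integers
packed = [0, 100, 15000]
print(unpack(packed, scale_factor=0.01, add_offset=273.15))
```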


4. Coordinate Types

Four types of coordinates receive special treatment by these conventions: latitude, longitude, vertical, and time. We continue to support the special role that the units and positive attributes play in the COARDS convention to identify coordinate type. We extend COARDS by providing explicit definitions of dimensionless vertical coordinates. The definitions are associated with a coordinate variable via the standard_name and formula_terms attributes. For backwards compatibility with COARDS use of these attributes is not required, but is strongly recommended.

Because identification of a coordinate type by its units is complicated by requiring the use of an external software package [UDUNITS] , we provide two optional methods that yield a direct identification. The attribute axis may be attached to a coordinate variable and given one of the values X , Y , Z or T which stand for a longitude, latitude, vertical, or time axis respectively. Alternatively the standard_name attribute may be used for direct identification. But note that these optional attributes are in addition to the required COARDS metadata.

Coordinate types other than latitude, longitude, vertical, and time are allowed. To identify generic spatial coordinates we recommend that the axis attribute be attached to these coordinates and given one of the values X , Y or Z . The values X and Y for the axis attribute should be used to identify horizontal coordinate variables. If both X- and Y-axis are identified, X-Y-up should define a right-handed coordinate system, i.e. rotation from the positive X direction to the positive Y direction is anticlockwise if viewed from above. We strongly recommend that coordinate variables be used for all coordinate types whenever they are applicable.

The methods of identifying coordinate types described in this section apply both to coordinate variables and to auxiliary coordinate variables named by the coordinates attribute (see Chapter 5, Coordinate Systems).

The values of a coordinate variable or auxiliary coordinate variable indicate the locations of the gridpoints. The locations of the boundaries between cells are indicated by bounds variables (see Section 7.1, "Cell Boundaries"). If bounds are not provided, an application might reasonably assume the gridpoints to be at the centers of the cells, but we do not require that in this standard.

4.1. Latitude Coordinate

Variables representing latitude must always explicitly include the units attribute; there is no default value. The units attribute will be a string formatted as per the udunits.dat file. The recommended unit of latitude is degrees_north. Also acceptable are degree_north, degree_N, degrees_N, degreeN, and degreesN.

Application writers should note that the Udunits package does not recognize the directionality implied by the "north" part of the unit specification. It only recognizes its size, i.e., 1 degree is defined to be pi/180 radians. Hence, determination that a coordinate is a latitude type should be done via a string match between the given unit and one of the acceptable forms of degrees_north .
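
Since Udunits cannot recover the directionality, the identification is a literal string match against the acceptable forms listed above:

```python
# The acceptable latitude unit strings listed above
LATITUDE_UNITS = {"degrees_north", "degree_north", "degree_N",
                  "degrees_N", "degreeN", "degreesN"}

def is_latitude_units(units):
    """True if the units string marks a latitude coordinate."""
    return units in LATITUDE_UNITS

print(is_latitude_units("degrees_north"))  # True
print(is_latitude_units("degrees_east"))   # False: longitude, not latitude
```

The longitude case in the next section works the same way, matching against the degrees_east family of strings.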

Optionally, the latitude type may be indicated additionally by providing the standard_name attribute with the value latitude , and/or the axis attribute with the value Y .

Coordinates of latitude with respect to a rotated pole should be given units of degrees , not degrees_north or equivalents, because applications which use the units to identify axes would have no means of distinguishing such an axis from real latitude, and might draw incorrect coastlines, for instance.

4.2. Longitude Coordinate

Variables representing longitude must always explicitly include the units attribute; there is no default value. The units attribute will be a string formatted as per the udunits.dat file. The recommended unit of longitude is degrees_east. Also acceptable are degree_east, degree_E, degrees_E, degreeE, and degreesE.

Application writers should note that the Udunits package has limited recognition of the directionality implied by the "east" part of the unit specification. It defines degrees_east to be pi/180 radians, and hence equivalent to degrees_north . We recommend the determination that a coordinate is a longitude type should be done via a string match between the given unit and one of the acceptable forms of degrees_east .

Optionally, the longitude type may be indicated additionally by providing the standard_name attribute with the value longitude , and/or the axis attribute with the value X .

Coordinates of longitude with respect to a rotated pole should be given units of degrees , not degrees_east or equivalents, because applications which use the units to identify axes would have no means of distinguishing such an axis from real longitude, and might draw incorrect coastlines, for instance.

4.3. Vertical (Height or Depth) Coordinate

Variables representing dimensional height or depth axes must always explicitly include the units attribute; there is no default value.

The direction of positive (i.e., the direction in which the coordinate values are increasing), whether up or down, cannot in all cases be inferred from the units. The direction of positive is useful for applications displaying the data. For this reason the attribute positive as defined in the COARDS standard is required if the vertical axis units are not a valid unit of pressure (a determination which can be made using the udunits routine, utScan) — otherwise its inclusion is optional. The positive attribute may have the value up or down (case insensitive). This attribute may be applied to either coordinate variables or auxiliary coordinate variables that contain vertical coordinate data.

For example, if an oceanographic netCDF file encodes the depth of the surface as 0 and the depth of 1000 meters as 1000 then the axis would use attributes as follows:

If, on the other hand, the depth of 1000 meters were represented as -1000 then the value of the positive attribute would have been up . If the units attribute value is a valid pressure unit the default value of the positive attribute is down .

A vertical coordinate will be identifiable by:

the presence of the positive attribute with a value of up or down (case insensitive).

Optionally, the vertical type may be indicated additionally by providing the standard_name attribute with an appropriate value, and/or the axis attribute with the value Z .

4.3.1. Dimensional Vertical Coordinate

The units attribute for dimensional coordinates will be a string formatted as per the udunits.dat file. The acceptable units for vertical (depth or height) coordinate variables are:

units of pressure as listed in the file udunits.dat. For vertical axes the most commonly used of these include bar, millibar, decibar, atmosphere (atm), pascal (Pa), and hPa.

units of length as listed in the file udunits.dat. For vertical axes the most commonly used of these include meter (metre, m) and kilometer (km).

other units listed in the file udunits.dat that may under certain circumstances reference vertical position such as units of density or temperature.

Plural forms are also acceptable.

4.3.2. Dimensionless Vertical Coordinate

The units attribute is not required for dimensionless coordinates. For backwards compatibility with COARDS we continue to allow the units attribute to take one of the values: level , layer , or sigma_level . These values are not recognized by the Udunits package, and are considered a deprecated feature in the CF standard.

For dimensionless vertical coordinates we extend the COARDS standard by making use of the standard_name attribute to associate a coordinate with its definition from Appendix D, Dimensionless Vertical Coordinates . The definition provides a mapping between the dimensionless coordinate values and dimensional values that can positively and uniquely indicate the location of the data. A new attribute, formula_terms , is used to associate terms in the definitions with variables in a netCDF file. To maintain backwards compatibility with COARDS the use of these attributes is not required, but is strongly recommended.

In this example the standard_name value atmosphere_sigma_coordinate identifies the following definition from Appendix D, Dimensionless Vertical Coordinates which specifies how to compute pressure at gridpoint (n,k,j,i) where j and i are horizontal indices, k is a vertical index, and n is a time index:

The formula_terms attribute associates the variable lev with the term sigma , the variable PS with the term ps , and the variable PTOP with the term ptop . Thus the pressure at gridpoint (n,k,j,i) would be calculated by
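
The definition in question (the atmosphere_sigma_coordinate formula from Appendix D) is p(n,k,j,i) = ptop + sigma(k) * (ps(n,j,i) - ptop). As a per-gridpoint sketch:

```python
def sigma_to_pressure(sigma, ps, ptop):
    """Pressure (Pa) from the atmosphere_sigma_coordinate definition:
    p = ptop + sigma * (ps - ptop)."""
    return ptop + sigma * (ps - ptop)

# sigma = 0.5 lies halfway between a 100 Pa model top and a 100000 Pa surface
print(sigma_to_pressure(0.5, ps=100000.0, ptop=100.0))  # 50050.0
```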

4.4. Time Coordinate

Variables representing time must always explicitly include the units attribute; there is no default value. The units attribute takes a string value formatted as per the recommendations in the Udunits package [UDUNITS]. The following excerpt from the Udunits documentation explains the time unit encoding by example:

The acceptable units for time are listed in the udunits.dat file. The most commonly used of these strings (and their abbreviations) include day (d), hour (hr, h), minute (min) and second (sec, s). Plural forms are also acceptable. The reference time string (appearing after the identifier since) may include date alone; date and time; or date, time, and time zone. The reference time is required. A reference time in year 0 has a special meaning (see Section 7.4, "Climatological Statistics").

Note: if the time zone is omitted the default is UTC, and if both time and time zone are omitted the default is 00:00:00 UTC.
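
Decoding such a units string is a matter of splitting on "since" and adding the offset to the reference time. A minimal standard-calendar sketch using only the stdlib (a real reader should use a library such as cftime, which honours the calendar attribute):

```python
from datetime import datetime, timedelta

SECONDS = {"day": 86400, "days": 86400, "hour": 3600, "hours": 3600,
           "minute": 60, "minutes": 60, "second": 1, "seconds": 1}

def decode_time(value, units):
    """Turn e.g. (25.5, 'days since 1990-01-01') into a datetime."""
    interval, _, reference = units.partition(" since ")
    base = datetime.fromisoformat(reference)  # date alone defaults to 00:00:00
    return base + timedelta(seconds=value * SECONDS[interval])

print(decode_time(25.5, "days since 1990-01-01"))  # 1990-01-26 12:00:00
```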

We recommend that the unit year be used with caution. The Udunits package defines a year to be exactly 365.242198781 days (the interval between 2 successive passages of the sun through vernal equinox). It is not a calendar year. Udunits includes the following definitions for years: a common_year is 365 days, a leap_year is 366 days, a Julian_year is 365.25 days, and a Gregorian_year is 365.2425 days.

For similar reasons the unit month , which is defined in udunits.dat to be exactly year/12 , should also be used with caution.

A time coordinate is identifiable from its units string alone. The Udunits routines utScan() and utIsTime() can be used to make this determination.

Optionally, the time coordinate may be indicated additionally by providing the standard_name attribute with an appropriate value, and/or the axis attribute with the value T .

4.4.1. Calendar

In order to calculate a new date and time given a base date, base time and a time increment one must know what calendar to use. For this purpose we recommend that the calendar be specified by the attribute calendar which is assigned to the time coordinate variable. The values currently defined for calendar are:

gregorian or standard

Mixed Gregorian/Julian calendar as defined by Udunits. This is the default.

proleptic_gregorian

A Gregorian calendar extended to dates before 1582-10-15. That is, a year is a leap year if either (i) it is divisible by 4 but not by 100 or (ii) it is divisible by 400.

noleap or 365_day

Gregorian calendar without leap years, i.e., all years are 365 days long.

all_leap or 366_day

Gregorian calendar with every year being a leap year, i.e., all years are 366 days long.

360_day

All years are 360 days divided into twelve 30-day months.
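
The proleptic_gregorian leap-year rule stated above translates directly to code:

```python
def is_leap_proleptic_gregorian(year):
    """Leap year iff (i) divisible by 4 but not 100, or (ii) divisible by 400."""
    return (year % 4 == 0 and year % 100 != 0) or year % 400 == 0

# 1900 fails both clauses; 2000 satisfies (ii); 2024 satisfies (i); 1500 fails both
print([y for y in (1900, 2000, 2024, 1500) if is_leap_proleptic_gregorian(y)])
# [2000, 2024]
```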

The calendar attribute may be set to none in climate experiments that simulate a fixed time of year. The time of year is indicated by the date in the reference time of the units attribute. The time coordinates that might apply in a perpetual July experiment are given in the following example.

Here, all days simulate the conditions of 15th July, so it does not make sense to give them different dates. The time coordinates are interpreted as 0, 1, 2, etc. days since the start of the experiment.

If none of the calendars defined above applies (e.g., calendars appropriate to a different paleoclimate era), a non-standard calendar can be defined. The lengths of each month are explicitly defined with the month_lengths attribute of the time axis:

month_lengths

A vector of size 12, specifying the number of days in the months from January to December (in a non-leap year).

If leap years are included, then two other attributes of the time axis should also be defined:

leap_year

An example of a leap year. It is assumed that all years that differ from this year by a multiple of four are also leap years. If this attribute is absent, it is assumed there are no leap years.

leap_month

A value in the range 1-12, specifying which month is lengthened by a day in leap years (1=January). If this attribute is not present, February (2) is assumed. This attribute is ignored if leap_year is not specified.

The calendar attribute is not required when a non-standard calendar is being used. It is sufficient to define the calendar using the month_lengths attribute, along with leap_year , and leap_month as appropriate. However, the calendar attribute is allowed to take non-standard values and in that case defining the non-standard calendar using the appropriate attributes is required.
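
A sketch of how a reader might apply these attributes, following the month_lengths, leap_year and leap_month semantics described above:

```python
def days_in_month(month, year, month_lengths, leap_year=None, leap_month=2):
    """Length of a month (1-12) under a month_lengths-defined calendar.
    If leap_year is given, every year differing from it by a multiple of
    four is also a leap year, and leap_month gains one day in those years."""
    days = month_lengths[month - 1]
    if leap_year is not None and (year - leap_year) % 4 == 0 and month == leap_month:
        days += 1
    return days

cal = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
print(days_in_month(2, 2024, cal, leap_year=2000))  # 29
print(days_in_month(2, 2023, cal, leap_year=2000))  # 28
```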

The mixed Gregorian/Julian calendar used by Udunits is explained in the following excerpt from the udunits(3) man page:

Due to problems caused by the discontinuity in the default mixed Gregorian/Julian calendar, we strongly recommend that this calendar should only be used when the time coordinate does not cross the discontinuity. For time coordinates that do cross the discontinuity the proleptic_gregorian calendar should be used instead.

4.5. Discrete Axis

The spatiotemporal coordinates described in sections 4.1-4.4 are continuous variables, and other geophysical quantities may likewise serve as continuous coordinate variables, for instance density, temperature or radiation wavelength. By contrast, for some purposes there is a need for an axis of a data variable which indicates either an ordered list or an unordered collection, and does not correspond to any continuous coordinate variable. Consequently such an axis may be called “discrete”. A discrete axis has a dimension but might not have a coordinate variable. Instead, there might be one or more auxiliary coordinate variables with this dimension (see preamble to section 5). Following sections define various applications of discrete axes, for instance section 6.1.1 “Geographical regions”, section 7.3.3 “Statistics applying to portions of cells”, section 9.3 “Representation of collections of features in data variables”.



Geologic Symbols for digital maps and GIS

Symbology is important in maps, as a map is inherently a symbolic, scaled representation of the real world or of a planet.

For years geologic maps were produced with DTP programs and were mere drawings, as in the previous century, when paper was the only medium available for map distribution.

Nowadays, GIS and web-based slippy maps are widespread, and this requires geo-located geology.

However, we all like beautiful maps, and good-looking geologic maps have correct and readable symbols. So herein we fill the gap!

WHERE DO THESE SYMBOLS COME FROM?

Some national-level geologic surveys have published the symbol sets they use in their official map production. BGS and USGS did a great job in this respect.

Current published definitions are:

Andrea Nass and others implemented section 25 (Planetary Geology Features) for ESRI's software. In this project we are aligning the symbol set to formats supported by other software packages.

INTEROPERABILITY OF DIGITAL SYMBOLS

Software development also brings different digital file formats. This is true for symbology as well, and as of 2016 there is no simple solution that works for every GIS/mapping package around. Hopefully the OpenGIS consortium's specifications for an interoperable open format will make life easier for everyone once they are implemented in every software package. Until then, we can keep different formats in sync, so that they dress maps following official specifications (in our case, the FGDC one).

DIRECTORIES OF THIS PROJECT

The directories of this project are organised by software package and format definition.

  • QGis: QGis' format, based on XML
  • ESRI: ESRI's own format
  • SVG: The scalable vector format is supported by QGis.
  • SE/SLD: The OpenGIS® Symbology Encoding Standard provides a way to describe symbology independently of the software being used. SLD allows applying Symbology Encoding (SE) to maps. QGis and ESRI's ArcServer support SLD.
  • docs: Documents directory, where the FGDC PDF and other supporting documents are kept.

Development of a library of geologic symbols for QGis takes place in the geologic-symbols-qgis repository, where you will find instructions on how to install the library on your computer.

Follow the instructions in this paper:

  • A. Nass, S. van Gasselt, R. Jaumann, H. Asche, Implementation of cartographic symbols for planetary mapping in geographic information systems, Planetary and Space Science, Volume 59, Issues 11-12, September 2011, Pages 1255-1264, ISSN 0032-0633, http://dx.doi.org/10.1016/j.pss.2010.08.022. (http://www.sciencedirect.com/science/article/pii/S0032063310002606)

We are currently working on the packages above. Contributions are enthusiastically welcome.

The problem of having meaningful symbology in modern software has been in the air for a while. Similarly to this project, some other efforts are available and are actively creating interesting solutions.

All the symbols developed here are distributed under a Creative Commons Attribution 3.0 Unported (CC BY 3.0) license.

This means that you can use, copy, distribute and improve these symbols, but please be so kind as to cite our work as reported in the legal statement below:

2012-2016 (c) Andrea Nass, Alessandro Frigeri

WHY DO I HAVE TO CITE THIS WORK?

Because we are working for you! Seriously, developing this is a time-consuming process, but we believe this project will result in beautiful and more understandable maps everywhere.

Use of digital material of this site:

  • Andrea Nass and Alessandro Frigeri. Geologic Symbols for digital mapping and GIS. Retrieved [Today's Date] from https://github.com/afrigeri/geologic-symbols

ESRI geologic symbols for planetary mapping have been produced by Andrea Nass and are described in a scientific paper:


6.2 How to create a good map

Here’s an example of a map layout that showcases several bad practices.

Figure 6.2: Example of a bad map. Can you identify the problematic elements in this map?

A good map establishes a visual hierarchy that ensures that the most important elements are at the top of this hierarchy and the least important are at the bottom. Typically, the top elements should consist of the main map body, the title (if this is a standalone map) and a legend (when appropriate).

When showcasing choropleth maps, it's best to limit the color swatches to fewer than a dozen; it becomes difficult for the viewer to tie too many different colors in a map back to the color swatches in the legend. Classification breaks should not be chosen at random but carefully: for example, adopt a quantile classification scheme to maximize the use of the different color swatches across the map, or a scheme based on logical (or easy-to-interpret) breaks when dictated by theory or cultural predisposition.
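A quantile classification scheme can be sketched in a few lines: sort the values, then cut the sorted list into equal-count groups. This is a minimal illustration, not the implementation used by any particular GIS package; the income values and class count are made up for the example.

```python
# Minimal sketch of quantile classification breaks for a choropleth map.
# The data values are hypothetical; real GIS packages may handle ties
# and interpolation differently.
def quantile_breaks(values, n_classes):
    """Return the n_classes - 1 interior break values so that each
    class holds roughly the same number of observations."""
    s = sorted(values)
    return [s[int(len(s) * k / n_classes)] for k in range(1, n_classes)]

# Example: 12 hypothetical median-income values split into 4 classes.
incomes = [18, 22, 25, 30, 31, 35, 40, 48, 52, 60, 75, 90]
print(quantile_breaks(incomes, 4))  # → [30, 40, 60]
```

With these breaks, each of the four classes contains three observations, so every color swatch in the legend is well represented in the map.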

Scale bars and north arrows should be used judiciously and need not be present in every map. These elements are used to measure orientation and distances. They are critical in reference maps such as USGS Topo maps and navigation maps, but serve little purpose in a thematic map whose goal is to highlight differences between areal units. If, however, these elements are to be placed in a thematic map, reduce their visual prominence (see Figure 6.3 for examples of scale bars). The same principle applies to the selection of an orientation indicator (north arrow): use a small north arrow design if it is to be placed low in the hierarchy, and a larger one if it is to be used as a reference (such as in a nautical chart).

Figure 6.3: Scale bar designs from simplest (top) to more complex (bottom). Use the simpler design if it’s to be placed low in the visual hierarchy.

  • Title and other text elements should be concise and to the point. If the map is to be embedded in a write-up such as a journal article, book or web page, the title and text elements should be omitted in favor of figure captions and descriptions in the accompanying text.

Following the aforementioned guidelines can go a long way in producing a good map. Here, a divergent color scheme is chosen whereby the two hues converge to the median income value. A coordinate system that minimizes distance error measurements and that preserves “north” orientation across the main map’s extent is chosen since a scale bar and north arrow are present in the map. The inset map (lower left map body) is placed lower in the visual hierarchy and could be omitted if the intended audience was familiar with the New England area. A unique (and unconventional) legend orders the color swatches in the order in which they appear in the map (i.e. following a strong north-south income gradient).
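The diverging scheme described above can be expressed as a simple piecewise normalization: values below the center (here, the median income) map to the lower half of the color ramp, values above it to the upper half. This is a sketch under assumed data ranges, not code from the chapter; libraries such as matplotlib offer equivalents (e.g. `TwoSlopeNorm`).

```python
# Sketch of a diverging normalization: the two hues of a diverging
# color ramp meet exactly at vcenter (e.g. the median income).
# All numeric values below are hypothetical.
def diverging_norm(value, vmin, vcenter, vmax):
    """Map value to [0, 1] so that vcenter lands at 0.5."""
    if value <= vcenter:
        return 0.5 * (value - vmin) / (vcenter - vmin)
    return 0.5 + 0.5 * (value - vcenter) / (vmax - vcenter)

# Example: incomes from 0 to 200 (thousands), median 50.
print(diverging_norm(50, 0, 50, 200))   # → 0.5  (median sits at ramp midpoint)
print(diverging_norm(125, 0, 50, 200))  # → 0.75
```

Centering the ramp on the median rather than the midpoint of the data range keeps the two hues balanced even when the income distribution is strongly skewed.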

Figure 6.4: Example of an improved map.



GeoNB is the Province of New Brunswick’s gateway to geographic information and related value-added applications.

  • Providing all users with easy access to geographic data, value-added applications and maps
  • Reducing duplication and costs through collaboration and the sharing of geographic data and infrastructure
  • Promoting and increasing the use of geographic data and maps


On October 1, 2020, Service New Brunswick issued the first separate Property Assessment Notice, as recommended in the 2017 special report into property assessment by New Brunswick's Auditor General.

Key improvements include a mobile-friendly design and support for Apple and Android devices.

The Province of New Brunswick has released over 23,000 square kilometers of LiDAR data acquired in 2018.

The Province of New Brunswick has launched a new online web application making high resolution digital aerial imagery available for download.

The Province of New Brunswick has launched a new online web application making provincial elevation data available.

The Province of New Brunswick has released an additional 24,927 square kilometers of LiDAR data.

A new app will give New Brunswickers access to flood forecast data for the Saint John River Basin on their mobile device.

GeoNB was a gold sponsor and exhibitor at the first annual National Geomatics Competition, February 16 to 18.

Natural Resources Canada has produced a set of High Resolution Digital Elevation Models (HRDEM) for New Brunswick.


Subject: 6) Why isn't my favorite format on this list?

If you don't see a format you're interested in here, it could be one of three reasons. First of all, there are a lot of formats which are out of the scope of this newsgroup: it ain't named sci.data.formats for nuthin', you know. Formats used in commercial spreadsheet and word-processing software aren't scientific data formats, and aren't discussed in this group.

Second, it may be that nobody has given the FAQ organizer any information on sources for information on that format. So ask the newsgroup -- and if you do get a response, please let me know what it is!

Finally, you may ask on the net and hear nothing, because the data format description just isn't publicly available. For most scientific data formats, this is a Bad Thing, and most archivists and scientists want to have their format information available. If you have such information, but don't have resources to make it available, please ask around and see if you can get it into an FTP area or other resource. Please don't publicize private or proprietary formats without the permission of the author, though.

This page generated from text FAQ Fri Oct 13 11:04:55 MDT 1995 by automatic process

