Open access peer-reviewed chapter - ONLINE FIRST

Usability of Open Data

Written By

Dharmender Salian

Submitted: 17 July 2023 Reviewed: 27 July 2023 Published: 10 November 2023

DOI: 10.5772/intechopen.1003269


From the Edited Volume

Open-Source Horizons - Challenges and Opportunities for Collaboration and Innovation [Working Title]

Laura M. Castro


Abstract

Open data (OD) describes the idea that data is freely available to people, entrepreneurs, and researchers for analysis and research. Globally, governments have taken initiatives to publish public data. Researchers and entrepreneurs wanting to do data analysis need to be trained in data management, as the quality and accessibility of open data datasets make the activity challenging. OD initiatives require considerable financial, technical, and human resources. Using concepts of the structuredness of data, a measurement of dataset usability is created. Using a randomly chosen set of datasets from a well-known open data portal, an instrument is developed, validated, and applied. The chapter ends by outlining future research directions and offering recommendations for distributors of open data datasets.

Keywords

  • usability
  • quality
  • reuse
  • data
  • citizen

1. Introduction

On 7 December 2007, 30 individuals met in Sebastopol, California, to work out how to define “open public data” [1]. This was the initial effort to define the term, and it produced eight principles of open government data:

  • Complete: all relevant details are included

  • Primary: data is gathered at the source

  • Timely: data is made available soon after collection

  • Accessible: data is convenient for a wide range of users and purposes

  • Machine processable: automated processing is possible

  • Non-discriminatory: anyone can use it

  • Non-proprietary: no entity has exclusive control over the format

  • License-free: use is not restricted by copyright, patent, or trademark

A year later, the memorandum on open data was signed by US President Obama. Open data is data that anybody can access, use, and share. Open data becomes usable when it is made accessible in a common machine-readable format. People should be allowed to use the data as they want, including sharing or transforming it. Questions arise about open data versus open government data versus public data. Open government data is open data over which government influence on re-use is minimal [2]. Open data is available to the public, entrepreneurs, and researchers so that they can create new services and products and build successful businesses [3]. The benefits of open data range from commercial value through innovation to increased government effectiveness. Entrepreneurs who use open data advance the open data movement and help exploit its full potential. The public should be encouraged to use open data and turn ideas into successful businesses. New insights and knowledge can be obtained from open data by making it easily accessible and transparent.

There is a need to involve researchers, students, and citizens to promote open data and boost its development. Open data is published on portals, where interoperability is paramount. Interoperability denotes the ability of various systems and organizations to work together (inter-operate); in this context, it is the ability to combine different datasets. Interoperability means being able to access data from two or more sources and integrate that data for further use. It is essential because it permits different components to work together. In some portals, data is of low quality, contains sensitive information, and has interoperability issues [4].
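
As a minimal illustration of interoperability at the dataset level, the following Python sketch joins two open datasets on a shared key using pandas. The file names and columns (air_quality.csv, demographics.csv, region_id) are hypothetical, not taken from any particular portal.

```python
import pandas as pd

# Hypothetical inputs: two open datasets that share a region_id key column.
air_quality = pd.read_csv("air_quality.csv")    # e.g. region_id, pm25
demographics = pd.read_csv("demographics.csv")  # e.g. region_id, population

# Interoperability in practice: integrating the two sources on the
# shared key so they can be analyzed together.
combined = air_quality.merge(demographics, on="region_id", how="inner")
print(combined.head())
```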

Governments globally are taking initiatives like Data.gov and Data.gov.uk, which further the goals of the open data movement. Open government implies a government that is collaborative and efficient [5]. Linked open data is open data published as linked, structured data that is interlinked with other data, making it useful through semantic queries; semantic queries enable data procurement in a programmatic fashion. Public or private organizations often control access to or re-use of data through licenses, and this acts as a barrier to the open data movement. Organizations use licenses to establish the status of a dataset, but restrictive terms can limit the use of open data. Open data is about being accessible, available, and reusable. For an organization, data availability refers to how often data is available to be used. Entrepreneurs mostly develop products and services, which requires reusable data that can be transformed to meet their requirements. Data accessibility is the degree to which government data is supplied in open and reusable formats, along with associated metadata. Though a lot of open government data is available on portals, citizens' awareness of its existence and usefulness is limited, hence the need to raise awareness [6].

Data transparency is the attribute of data being used with integrity, lawfully, and for legitimate purposes. Increased accountability and decreased corruption are the advantages of increased transparency. Wrongdoing is more likely to be detected, and corruption reduced, when scrutiny and monitoring are given due prominence. Officials will be averse to doing anything wrong if they know they are being watched. Inspection at every stage should be mandatory in organizations, and officials should encourage transparency from top to bottom. Increased transparency requires clarity and discussion among officials so that all know where the boundaries are and that anything beyond them will be punished. Transparency leads to increased trust and better employee engagement. Suspicion of illegal activity and bad practices can raise barriers against an organization and its products, potentially leading to a boycott. Trust is essential for an organization, whether from employees, customers, vendors, or other stakeholders; good relationships and long-term bonds are built on trust and fair dealings. In today's world, news moves fast on social media and other channels, and it can make or break an organization's reputation in the market. Customers value dealing with organizations that have a clean record and an ethical approach to business. Transparency in internal workings and in business dealings with customers and vendors leads to better relations with all stakeholders.

Open data becomes useful when a machine can process it and a human can understand it; thus, open data quality is important. Open data quality has five characteristics: accuracy, completeness, reliability, relevance, and timeliness [7]. Accuracy indicates that the information is correct in every respect. Completeness shows how all-inclusive the information is. Reliability shows whether the information is contradicted by another dependable source. Relevance indicates whether the information is needed. Finally, timeliness tells whether the information is current. Data quality indicates whether the data can be used in the requisite circumstances; when data meets these requirements, it is valuable and can be used in the required context.

Open data reuse depends on the data being in the required format and governed by procedures that allow anyone to use it. Data that requires minimal effort from an entrepreneur before reuse will be valued more; if extra work is needed to bring it into the needed form, that sets up a barrier to usage. Open data portals should therefore enable the reuse of open data, ensure the efficiency of data transmission, and support professional initiatives based on data reuse [8].

Governments are becoming responsive and are launching innovative programs and new portals, such as Data.gov and Data.gov.uk, to cater to open data. Open government data is a form of open data created by government institutions. Open data may include non-textual material such as maps and even medical data. The commercial value of data has acted as a barrier to open data initiatives: under some licenses, charges must be paid to reuse data. This hampers the progress of the open data movement, and many suggest that license-free data would be for the good of all concerned. Open data can improve government functioning through transparency and reduced corruption, and it can enable new tools to solve societal or real-world problems. Many argue that public money was used to generate this data, so restrictions on re-use are unwarranted.

Governments host open data portals where public institutions' datasets are available in different formats, and there is a need to ensure data quality. Different organizational units publish this open data, and quality standards are lacking even across units of the same organization. Poor-quality data hampers the effective use of open data datasets, and inconsistent standards slow down open data initiatives, since consumers are forced to spend time and resources improving the data before it can be re-used for their actual requirements. Quality in open data is measured by characteristics such as completeness, accuracy, timeliness, relevance, and reliability. Open data quality matters because users do not want to waste time and resources improving it; ensuring quality at the source or portal itself eases the problems faced by end users. Quality planning, assurance, and control need their due importance. Information provided in a standardized format is known as structured data. There are three types of data for analysis: structured, semi-structured, and unstructured. Semi-structured data is not formatted in conventional ways but is easier to handle than unstructured data; CSV and XML are semi-structured formats, and CSV files, for example, can easily be imported into SQL databases for further analysis.

File formats include JSON, XML, RDF, comma-separated values (CSV), plain text, HTML, and others. Choosing the correct format is essential to maximize the usability of the data. Portal managers should publish open data in a standardized, machine-readable format that customers can readily use.
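
As a hedged sketch of how a machine-readable format can be moved between representations, the snippet below reads a hypothetical CSV file, re-serializes it as JSON, and imports it into a SQLite database, echoing the CSV-to-SQL remark above. The file and table names are assumptions.

```python
import sqlite3

import pandas as pd

# Hypothetical open data file in CSV, a machine-readable baseline format.
df = pd.read_csv("dataset.csv")

# Re-serialize as JSON for consumers who prefer that format.
df.to_json("dataset.json", orient="records", indent=2)

# Import into a SQL database (SQLite here) for further analysis.
with sqlite3.connect("open_data.db") as conn:
    df.to_sql("dataset", conn, if_exists="replace", index=False)
```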

This chapter is organized as follows. Section 2 surveys the literature on the quality of open data datasets. Section 3 reviews available indexes for open data. Section 4 describes the sampling of the datasets. Section 5 details the development of a tool to measure and evaluate the usability of open data. Section 6 discusses the limitations of this study and future research directions for researchers, entrepreneurs, and other stakeholders, and Section 7 concludes.


2. Literature survey

The data-driven global economy reflects the technological breakthroughs of the past few decades. Data has become the new oil, and new tools are flooding the market to exploit it. The collection and use of data are challenging and are becoming more complex with technological advances all around. Global internet traffic has increased many-fold, data transmission is enormous, and both will keep evolving over the coming decades. Entrepreneurs have seen this growth and know the potential the data economy holds. Digital technologies hold enormous potential, and this is just the beginning. Creators of data are now looking at the value associated with it, and policies and strategies are needed to capture value from data-driven opportunities.

Data processing and cleanup are required to obtain the hidden value. Technology solutions have opened new opportunities which, if planned well, can create value for organizations, society, and the nation. The quality of data also depends on context and user [9]. Free government datasets are available on the web on different platforms in non-proprietary formats [10]. For data sources, a process of source assessment, determination of quality grades, and final selection needs to be specified [11].

Users want good-quality data. Structure is important: records must have non-empty keys, be easy to analyze, and be trustworthy, and formats must be machine-readable. Missing values, inconsistencies within a column, superfluous information, duplicates, and bad formatting lead to losses, since extensive data-cleaning operations are then required. Open data quality therefore needs to be measured, and the data made as clean as possible, so it is ready for users' reuse requirements. These defects need to be eliminated to avoid damaging usability later. High-quality data can be the thin dividing line between success and failure. To achieve these goals, a structural usability assessment is required for open data datasets. Organizations need a metric: innovative technologies and data processing techniques are on the market, but they need good-quality data to provide useful results after re-use. Many file formats are available, such as CSV, XLSX, TXT, JSON, JPEG, PNG, ZIP, and HTML, and users need to be able to convert from one format to another as required. Most software programs for data analysis can use formats such as XML-based files (XML), text files, comma-separated values (CSV), spreadsheets, rich text format (RTF), and database tables.
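
A minimal sketch of such structural checks with pandas follows; the file name is hypothetical, and the mixed-type test is only a heuristic, not a complete assessment.

```python
import pandas as pd

# Hypothetical sample file downloaded from a portal.
df = pd.read_csv("sample_dataset.csv")

# Missing values per column.
print(df.isna().sum())

# Exact duplicate rows.
print("duplicate rows:", df.duplicated().sum())

# A heuristic for inconsistent values within a column: object columns
# that mix numeric-looking and non-numeric entries.
for col in df.select_dtypes(include="object").columns:
    values = df[col].dropna()
    as_numbers = pd.to_numeric(values, errors="coerce")
    if as_numbers.notna().any() and as_numbers.isna().any():
        print(f"column {col!r} mixes numeric and non-numeric values")
```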

Open data needs a metric: a measuring system that evaluates the quality of open data, establishes how useful and relevant a dataset is, and separates high-quality data from poor-quality data. There are two established methods, the Global Open Data Index and the Open Data Barometer, discussed separately below due to their usefulness and prominence. There is also another index, the Open Data Inventory, the first index to evaluate both the coverage and openness of national statistical systems [12]. Its focus is to discover gaps, promote open data guidelines, enhance access, and encourage communication between national statistical offices (NSOs) and data users. The Open Data Inventory centers on macrodata; survey responses and administrative records are the ultimate source of most microdata, the unit-record-level data underlying macrodata. The Open Data Inventory measures how thorough a country's statistical offerings are and whether its records meet global standards of openness.


3. Available indexes for open data

Open data indices indicate how open a data portal is and encourage open data policies. We will discuss two open data indices: the Open Data Barometer and the Global Open Data Index. Both evaluate the openness of a dataset based on questions such as whether the data exists, whether it is publicly available, whether payment is required, whether it is available in bulk, whether it is machine-readable, and whether it is timely.

3.1 Global open data index

The Global Open Data Index shows the state of open government data publication. It is an independent assessment that tracks governments' progress on open data release. Its scope is narrow and confined to practical openness; no assessment is made of use or impact, and data quality, a significant barrier to re-use, is not covered. The survey consists of a set of questions, and scoring is based on the replies received. After results are published, feedback is obtained from public officials to improve the assessment. It is a challenging effort focused on governments so they can improve their scores. The Open Data Index exposes a (simple) API for programmatic access to data; currently, the API is accessible in both JSON and CSV formats. The Open Data Index also has a small set of tools and patterns for implementing visualizations. Data is prepared by a Python script, which pulls data from the Open Data Index Survey (Census), processes it in diverse ways, and then writes it to the data directory.
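
A sketch of such programmatic access is shown below. The endpoint URL is a placeholder, not a documented address; only the JSON/CSV availability is taken from the description above, so consult the Index's own documentation for the actual path.

```python
import requests

# Placeholder endpoint: substitute the real path documented by the
# Open Data Index; only JSON/CSV availability comes from the text above.
URL = "https://example.org/open-data-index/api/places.json"

response = requests.get(URL, timeout=30)
response.raise_for_status()
places = response.json()
print(f"retrieved {len(places)} records")
```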

3.2 Open data barometer

The Open Data Barometer evaluates the occurrence and global impact of open data initiatives. It looks at governments' use of open data for social impact and accountability. The survey asks questions on open data, policy, and implementation. It is a key benchmark of progress on openness and policy change in more than a dozen countries. It examines global trends and supplies country-level data focused on open data impact and readiness.

As shown above, however, limited data is gathered on the structural usability of open data datasets; the criteria used in these indices are based on functional dependencies and data summaries. This chapter will focus on the structural accuracy of open data datasets.


4. Sampling the datasets

Here the effort is to develop a tool and metric useful for the assessment of open data datasets. There are numerous government portals for open data (see Table 1). Most portals host a few thousand datasets from government agencies and organizations, while the US government's Data.gov portal hosts a few hundred thousand datasets from the Federal government, local governments, and nonfederal open data resources. The intention is to provide access to open government data, boost innovation, and make government transparent. Samples were collected by downloading datasets from the Data.gov portal; these were random samples of CSV data files. The datasets were analyzed further to build a score enabling their quantitative evaluation, with a focused effort to find structural shortcomings in the files to support the development of a tool.


5. Evaluating the usability of the datasets

In general, data comes in two forms: rectangular and non-rectangular. In rectangular form, data is shaped as a rectangle, with every data value belonging to a row and a column. The sample files selected had headers and rectangular shapes, but a few files were non-rectangular. Non-rectangular data is not systematically arranged: datatypes within a column may be inconsistent or missing, so one data structure is followed by a different one. Pivoted data is also found, such as “[Commodity]” and “[Commodity].[All Commodities].[Foods].[Fried Goods]”. This is useful data but becomes complex for analysts. Software tools like Power BI and Excel can be used to unpivot such data (a scripted sketch follows Table 3). Table 2 shows a dataset with a star schema; Table 3 shows duplicates and inconsistent columns.

| Customer_id | Customer name | Expenditure | Stateid_fk | Cityid_fk | Countyid_fk | Subcountyid_fk |
| --- | --- | --- | --- | --- | --- | --- |
| 101 | Davis | 1021.00 | 1 | 2 | 5 | 7 |
| 102 | Fred | 1239.00 | 5 | 5 | 2 | 3 |
| 103 | Smith | 1892.00 | 3 | 5 | 8 | 9 |
| 104 | Jones | 1972.00 | 8 | 7 | 9 | 1 |

Table 2.

Dataset with star schema.

| Inmate # | Inmate name | Sex | DOB | Release type | Release date |
| --- | --- | --- | --- | --- | --- |
| 4319087721 | John Dylan | M | 08/21/1991 | Misdemeanor | 02/22/2020 |
| 2319087634 | John Dylan | M | 08/03/1990 | Misdemeanor | 07/15/2021 |
| 4317793234 | Barbara Holster | F | 01/05/1999 | Supervised Release Program | 03/17/2020 |
| 7819087634 | Fred Wilson | Multiple | 01/03/1980 | Misdemeanor | 05/19/2022 |
| 9311187634 | Kiran Naveen | M | 08/01/1981 | Supervised Release Program | 09/11/2020 |
| 5312387634 | Jeevan Cook | M | 08/02/1985 | Supervised Release Program | 02/15/2022 |

Table 3.

Dataset with an identifier column, duplicates, non-standard format, and inconsistent column.
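
As noted above, unpivoting can also be scripted. Here is a minimal pandas sketch, assuming a hypothetical wide table whose column headers are variable values (years) rather than variable names; the commodity rows and figures are invented for illustration.

```python
import pandas as pd

# Hypothetical wide-format data: the year headers are variable values,
# not variable names, a pattern noted among the sampled files.
wide = pd.DataFrame({
    "commodity": ["All Commodities", "Fried Goods"],
    "2019": [410.0, 120.0],
    "2020": [395.0, 135.0],
})

# melt() unpivots the year columns into proper (year, amount) rows.
tidy = wide.melt(id_vars="commodity", var_name="year", value_name="amount")
print(tidy)
```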

Based on the discussion and analysis above, a dataset can be evaluated using the questions in Table 4.

| No. | Questions | Yes | No | Weight |
| --- | --- | --- | --- | --- |
| 1 | Is the dataset a rectangular table? | + | | W1 |
| 2 | Column values are consistent | + | | W2 |
| 3 | Follows star schema | + | | W3 |
| 4 | Identifier is present | + | | W4 |
| 5 | Grouping of rows | + | | W5 |
| 6 | Grouping of columns | + | | W6 |
| 7 | Existence of multiple dependencies among the columns | 0 | + | W7 |
| 8 | Non-standard format | | + | W8 |

Table 4.

Questions.

Weights are assigned based on importance. Among the datasets, variations were seen to occur for distinct reasons: a dataset might lack a ‘rectangular’ shape because it had a report header and many blank rows, or its headers were variable values rather than variable names (see Table 5).

| No. | Questions | Yes | No | Weight (%) |
| --- | --- | --- | --- | --- |
| 1 | Is the dataset a rectangular table? | + | | 20 |
| 2 | Column values are consistent | + | | 16 |
| 3 | Follows star schema | + | | 12 |
| 4 | Identifier is present | + | | 12 |
| 5 | Grouping of rows | + | | 9 |
| 6 | Grouping of columns | + | | 9 |
| 7 | Existence of multiple dependencies among the columns | 0 | + | 11 |
| 8 | Non-standard format | | + | 11 |

Table 5.

Weights allocation.

If a dataset has “Yes” for Question 1, it will get 0.20 points for Question 1; if it has “No” for Question 5, it will get −0.09 for Question 5. The sum of these points gives the utility index, so usability can be calculated for each dataset. A weight can vary from 0% at the lowest to 25% at the highest, based on the importance each user assigns as per their requirements, and all weights sum to 100%; the highest possible score for a dataset is therefore 1. Samples taken from the US government's Data.gov portal were evaluated against these criteria: 94% of the downloaded sample datasets were rectangular, and 90% had consistent column values (see Table 6).

| No. | Questions | Yes (%) |
| --- | --- | --- |
| 1 | Is the dataset a rectangular table? | 94 |
| 2 | Column values are consistent | 90 |
| 3 | Follows star schema | 4 |
| 4 | Identifier is present | 80 |
| 5 | Grouping of rows | 15 |
| 6 | Grouping of columns | 11 |
| 7 | Existence of multiple dependencies among the columns | 32 |
| 8 | Non-standard format | 20 |

Table 6.

Dataset properties.
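
To make the scoring scheme concrete, the sketch below computes the utility index from the weights in Table 5. The treatment of question 7 follows Table 4 (a “yes” scores zero); the sign convention for question 8 mirrors it and is an assumption, not something stated in the tables.

```python
# Weights from Table 5, expressed as fractions; they sum to 1.0.
WEIGHTS = {1: 0.20, 2: 0.16, 3: 0.12, 4: 0.12, 5: 0.09, 6: 0.09, 7: 0.11, 8: 0.11}

def usability_score(answers: dict[int, bool]) -> float:
    """answers maps question number -> True ("yes") or False ("no")."""
    score = 0.0
    for question, yes in answers.items():
        weight = WEIGHTS[question]
        if question <= 6:        # "yes" is desirable (Table 4)
            score += weight if yes else -weight
        elif question == 7:      # "yes" scores zero per Table 4
            score += 0.0 if yes else weight
        else:                    # question 8: assumed penalty for "yes"
            score += -weight if yes else weight
    return score

# Example: rectangular and consistent, with an identifier, but no star
# schema, no grouping, no multiple dependencies, standard format.
answers = {1: True, 2: True, 3: False, 4: True,
           5: False, 6: False, 7: False, 8: False}
print(round(usability_score(answers), 2))  # 0.4
```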

Using the utility index, scores of the sample datasets were calculated. The scores for the different sample datasets were then aggregated and analyzed (see Table 7).

| Statistic | Value |
| --- | --- |
| Mean | 0.700 |
| Standard error | 0.040 |
| Median | 0.800 |
| Mode | 0.900 |
| Skewness | −1.9 |

Table 7.

Summary statistics of usability scores.

Most datasets have high scores (median = 0.800, mean = 0.700), while a small number of low-scoring datasets skew the distribution to the left (skewness = −1.9).
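
These summary statistics can be reproduced with pandas for any vector of scores; the score values below are purely illustrative, not the study's actual data.

```python
import pandas as pd

# Illustrative utility-index scores only.
scores = pd.Series([0.9, 0.9, 0.8, 0.8, 0.7, 0.6, 0.3, -0.4])

print("mean:", scores.mean())
print("standard error:", scores.sem())
print("median:", scores.median())
print("mode:", scores.mode().iloc[0])
print("skewness:", scores.skew())
```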


6. Limitations of this study

This research examined only datasets from a few portals, such as Data.gov. Future work could draw more diverse samples from the portals of other countries; DataPortals.org shows that there are 598 open data portals worldwide. A more diverse sample would reveal the problems with datasets globally, and a country-by-country list could be made detailing the data quality of each portal worldwide. The US and many other countries view setting up portals as beneficial to society, but some may view it as revenue generation, and still others may do it in name only, considering a serious effort too costly for their country.

Details that can be gathered about these portals include whether they share good-quality data. Datasets need to be examined for compliance, consistency, completeness, and correctness. Which formats, such as CSV and JSON, does a portal provide? Is there an effort to repair datasets after receiving them from different organizations and before sharing them with the public? How many formats are provided in total, to give users a wider choice, and are they both human-readable and machine-readable? Formats like CSV, JSON, and XML are more popular with the public and are easily shareable. The right structure and format are important to maximize the usability of the data, to ease access for customers, and to minimize the hidden cost of repairing datasets for reuse.

Portals also need to look at the timeliness of the datasets available on their site. Users need to make informed decisions or reuse these datasets for their services, hence timeliness is important. One solution could be to connect open data datasets to the master database [13], minimizing update issues. Users will also appreciate portals automating the collection, processing, and storage of datasets; performed manually, these tasks are laborious, and any slack in handling ultimately hurts users.
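
A minimal sketch of an automated freshness check follows, assuming each catalog record carries a last-updated timestamp; the modified field name and the records themselves are hypothetical.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical catalog records; "modified" is an assumed metadata field.
catalog = [
    {"name": "inspections", "modified": "2023-05-01T00:00:00+00:00"},
    {"name": "budgets", "modified": "2021-01-15T00:00:00+00:00"},
]

# Flag datasets not updated within the last year.
threshold = datetime.now(timezone.utc) - timedelta(days=365)
for record in catalog:
    modified = datetime.fromisoformat(record["modified"])
    if modified < threshold:
        print(f"{record['name']}: stale (last updated {modified.date()})")
```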

Design issues exist at many data portals. Datasets are provided as given by organizations, not as required by users: a user may be searching for a place, but if the provider does not offer a dataset in that form, it is irrelevant to the user. Data USA is a site that has addressed these design issues and improved accordingly. The effort has been to bring data closer to users, make it understandable to humans, provide visualizations, and optimize for search engines. Through these efforts at visualizing and transforming data, users are satisfied, as shown by the millions of visitors to the site [14]. A portal should help users understand the structure, methodology, and arrangement of its datasets, making their work easier.

Based on the above details, global open data portals could in the future be listed and ranked. This would help users know a portal's data quality beforehand and could act as an incentive for portals to improve their data quality. Open data has gained popularity, but portals need to make it straightforward and easy for users. This global ranking of portals should be based on the structuredness of datasets, the user-friendliness of the portal, the formats of datasets available, and timeliness; these core features benefit users and attract large numbers of them to a site.

6.1 Future research directions

There is a need for automated tools for classifying and assessing the datasets available at the portals, since users typically take time to find and query the datasets they need. Portals should be evaluated on factors such as speed, effectiveness, and satisfaction: speed is how quickly users can analyze the datasets, effectiveness is whether users achieved their goals, and satisfaction is whether the datasets fulfilled users' requirements. Obtaining quality data is difficult, and organizations presently use tools such as Talend Open Profiler, Apache Griffin, and Power MatchMaker; these names are mentioned to show that cleaning tools are available in the market to help users with data validation, processing, and assessment. There is still a need for automated tools for open data, so that users spend less time searching for datasets and less on the hidden costs of repairing them.


7. Conclusion

This study is directed at the usability of open data. Citizens, entrepreneurs, and stakeholders are at the heart of the open data initiative. Designing an open data output is not enough; it must satisfy the needs of its end users, which include the structuredness of the open data dataset. Datasets will be used more for decision-making and reuse if there is support for different formats, interoperability, and minimal cleansing costs for end users. Portals, public sector agencies, and other producers of open data datasets need to ensure that values under a column have the same data type and that missing values are minimal. Users want to reuse data, so portals should improve its form to make visualizations and analytics easier.

The effort in this chapter has been to shed more light on the structuredness of open data datasets and to show how measuring it gives users a basis for choosing a dataset from any portal according to their requirements. The utility index is a good starting point for assessing the structuredness of open data datasets.


Appendix: definitions of terms

CSV (comma separated values) file

A CSV file is a text file with a specific format that allows information to be saved in a table-based structure.

Data

Data includes lists, tables, graphs, charts, and images. Data may be structured or unstructured.

Data cleaning or scrubbing

Data cleansing, additionally known as data cleaning or data scrubbing, is the action of fixing incorrect, incomplete, duplicate or otherwise erroneous data in a data set.

Data portal

A portal is a web-based platform that collects information from distinct sources into a single user interface and presents users with the most applicable information for their context.

Database

A database is a software system for processing and managing data.

Dataset

A dataset is any organized collection of data.

File format

The file format refers to the internal arrangement (format) of the file, not how it is displayed to users. For example, CSV and XLS files are structured very differently, but may look similar or identical when opened in a spreadsheet program. The format corresponds to the last part of the file name or extension.

Machine readable

Able to be understood and used by a computer. To be machine readable, data must be structured in an organized way. CSV, JSON, and XML among others, are formats that contain structured data that a computer can automatically read and process.

Metadata

Metadata is information about a dataset that makes the data easier to find or identify.

Open data

Data is open if it can be freely accessed, used, modified and shared by anyone for any purpose.

PDF (portable document format)

PDF is a multi-platform file format.

Structured data

Structured data refers to information with a high degree of standardization, clearly defined and readily searchable by search engines.

Unstructured data

Unstructured data (or unstructured information) is information that is usually stored in its native format. It either does not have a pre-defined data model or is not organized in a pre-defined manner; a flat file is one example.

XML

Extensible Markup Language is a flexible file format designed to store, transport and share data over the Internet.

References

  1. 10 Years of Open Data. Opendatasoft. 2017. Available from: https://www.opendatasoft.com/en/blog/open-data-anniversary-ten-years-after-the-sebastopol-meeting/
  2. Dickinson A. What's the difference between open data and open government data. Medium. 2016. Available from: https://medium.com/@digidickinson/whats-the-difference-between-open-data-and-open-government-data-8a28eb525d2a
  3. Alzamil ZS, Vasarhelyi MA. A new model for effective and efficient open government data. International Journal of Disclosure and Governance. 2019;16(4):174-187. DOI: 10.1057/s41310-019-00066-w
  4. Bargh MS, Choenni S, Meijer R. ICEGOV '15-16: Proceedings of the 9th International Conference on Theory and Practice of Electronic Governance. New York, United States: Association for Computing Machinery; 2016. pp. 199-206. DOI: 10.1145/2910019.2910037
  5. Corrêa AS, Paula E Cd, Corrêa P, Pizzigatti L, Silva FS Cd. Transparency and open government data. Transforming Government: People, Process and Policy. 2017;11(1):58-78. DOI: 10.1108/TG-12-2015-0052
  6. Chokki AP, Simonofski A, Frénay B, Vanderose B. Open government data awareness: Eliciting citizens' requirements for application design. Transforming Government: People, Process and Policy. 2022;16(4):377-390. DOI: 10.1108/TG-04-2022-0057
  7. Sarfin RL. 5 Characteristics of Data Quality. Precisely; 2022. Available from: https://www.precisely.com/blog/data-quality/5-characteristics-of-data-quality#:∼:text=There%20are%20five%20traits%20that,read%20on%20to%20learn%20more
  8. Abella A, Ortiz-de-Urbina-Criado M, De-Pablos-Heredero C. Criteria for the identification of ineffective open data portals: Pretender open data portals. El Profesional De La Información. 2022;31(10). DOI: 10.3145/epi.2022.ene.11
  9. Martin EG, Law J, Ran W, Helbig N, Birkhead GS. Evaluating the quality and usability of open data for public health research: A systematic review of data offerings on 3 open data platforms. Journal of Public Health Management and Practice. 2017;23(4):e5-e13. DOI: 10.1097/PHH.0000000000000388
  10. D'Agostino M, Samuel N, Sarol M, de Cosio F, Marti M, Luo T, et al. Open data and public health. Revista Panamericana de Salud Pública. 2018;42. DOI: 10.26633/RPSP.2018.66
  11. Stróżyna M, Eiden G, Abramowicz W, Filipiak D, Małyszko J, Węcel K. A framework for the quality-based selection and retrieval of open data—A use case from the maritime domain. Electronic Markets. 2018;28(2):219-233
  12. Assessing the Coverage and Openness of Official Statistics. Open Data Inventory. Open Data Watch. n.d. Available from: http://opendatawatch.com/monitoring-reporting/open-data-inventory/
  13. Schrack A. Guide to creating, using, and maintaining open data portals. Safe Software. 2021. Available from: https://engage.safe.com/blog/2021/04/guide-creating-using-maintaining-open-data-portals/
  14. Cesar A. What's wrong with open-data sites--and how we can fix them. Scientific American. 2016. Available from: https://blogs.scientificamerican.com/guest-blog/what-s-wrong-with-open-data-sites-and-how-we-can-fix-them/
