
Exporting Data from Pandas

After creating an intermediate or final dataset in pandas, we can export the values from the DataFrame to several other formats. The most common one is CSV, and the command to do so is df.to_csv('filename.csv'). Other formats, such as Parquet and JSON, are also supported.
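
As a minimal sketch, assuming a DataFrame named df is already in memory and, for Parquet, that an engine such as pyarrow or fastparquet is installed, the most common exports look like this:

    # export the DataFrame to CSV, Parquet, and JSON; the file names are placeholders
    df.to_csv('output.csv', index=False)
    df.to_parquet('output.parquet')
    df.to_json('output.json', orient='records')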

Note

Parquet is particularly interesting, and it is one of the big data formats that we will discuss later in the book.

Exercise 7: Exporting Data in Different Formats

After finishing our analysis, we may want to save our transformed dataset with all the corrections, so that if we want to share this dataset or redo our analysis, we don't have to transform the data again. We can also include our analysis as part of a larger data pipeline, or even use the prepared data as input to a machine learning algorithm. We can accomplish this by exporting our DataFrame to a file in the right format:

  1. Import all the required libraries and read the data from the dataset using the following command:

    import numpy as np

    import pandas as pd

    url = "https://opendata.socrata.com/api/views/cf4r-dfwe/rows.csv?accessType=DOWNLOAD"

    df = pd.read_csv(url)

    Redo all adjustments for the data types (date, numeric, and categorical) in the RadNet data. The types should be the same as in Exercise 6: Aggregation and Grouping Data.

  2. Select the numeric columns and the categorical columns, creating a list for each of them:

    columns = df.columns

    id_cols = ['State', 'Location', 'Date Posted', 'Date Collected', 'Sample Type', 'Unit']

    columns = list(set(columns) - set(id_cols))

    columns

    The output is as follows:

    Figure 1.16: List of columns

  3. Apply the lambda function that replaces Non-detect with np.nan:

    df['Cs-134'] = df['Cs-134'].apply(lambda x: np.nan if x == "Non-detect" else x)

    df.loc[:, columns] = df.loc[:, columns].applymap(lambda x: np.nan if x == 'Non-detect' else x)

    df.loc[:, columns] = df.loc[:, columns].applymap(lambda x: np.nan if x == 'ND' else x)
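
    As an optional check, not part of the original steps, we can count how many missing values the replacement produced in each measurement column:

    # count the NaN values created by replacing 'Non-detect' and 'ND' (optional check)
    df.loc[:, columns].isna().sum()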

  4. Remove the spaces from the categorical columns:

    df.loc[:, ['State', 'Location', 'Sample Type', 'Unit']] = df.loc[:, ['State', 'Location', 'Sample Type', 'Unit']].applymap(lambda x: x.strip())
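
    If any of these columns contained missing values, calling x.strip() on a NaN float would raise an error; a more defensive variant (an assumption, not needed for this dataset) uses the .str accessor, which skips missing values:

    # NaN-safe alternative using the .str accessor
    for col in ['State', 'Location', 'Sample Type', 'Unit']:
        df[col] = df[col].str.strip()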

  5. Transform the date columns to the datetime format:

    df['Date Posted'] = pd.to_datetime(df['Date Posted'])

    df['Date Collected'] = pd.to_datetime(df['Date Collected'])
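
    If any entries were malformed, pd.to_datetime would raise an error; a hedged variant, only needed if the dates turn out not to be clean, coerces unparseable values to NaT instead:

    # tolerate malformed dates by turning them into NaT (optional)
    df['Date Posted'] = pd.to_datetime(df['Date Posted'], errors='coerce')
    df['Date Collected'] = pd.to_datetime(df['Date Collected'], errors='coerce')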

  6. Transform all numeric columns to the correct numeric format with the to_numeric method:

    for col in columns:

        df[col] = pd.to_numeric(df[col])
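
    The same conversion can be written without an explicit loop; the one-liner below is an equivalent sketch, with errors='coerce' added as an assumption in case any stray strings remain:

    # vectorized alternative to the loop above (optional)
    df[columns] = df[columns].apply(pd.to_numeric, errors='coerce')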

  7. Transform all categorical variables to the category type:

    df['State'] = df['State'].astype('category')

    df['Location'] = df['Location'].astype('category')

    df['Unit'] = df['Unit'].astype('category')

    df['Sample Type'] = df['Sample Type'].astype('category')
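
    At this point it is worth confirming that every column has the intended type; a quick check, not part of the original steps, is:

    # inspect the resulting dtypes and the memory footprint
    print(df.dtypes)
    df.info(memory_usage='deep')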

  8. Export our transformed DataFrame, with the right values and columns, to the CSV format with the to_csv method. Exclude the index using index=False, use a semicolon as the separator with sep=';', and encode the data as UTF-8 with encoding='utf-8':

    df.to_csv('radiation_clean.csv', index=False, sep=';', encoding='utf-8')
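
    To make sure the file round-trips correctly, we can read it back with the same separator and encoding; the parse_dates argument here is an assumption, used only to restore the datetime columns:

    # read the exported CSV back to verify the round trip (optional)
    check = pd.read_csv('radiation_clean.csv', sep=';', encoding='utf-8',
                        parse_dates=['Date Posted', 'Date Collected'])
    print(check.dtypes)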

  9. Export the same DataFrame to Parquet, a columnar binary format, with the to_parquet method:

    df.to_parquet('radiation_clean.prq', index=False)

    Note

    Be careful when converting a datetime to a string!
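
    A quick way to verify the Parquet export, and to see that the datetime columns survive without any string conversion, is to read the file back; this sketch assumes a Parquet engine such as pyarrow is installed:

    # read the Parquet file back and check that the dtypes survive
    check = pd.read_parquet('radiation_clean.prq')
    print(check.dtypes)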
