
Commit 626edc7

Authored by shaunagm, Jason94, austinweisgrau, IanRFerguson, and codygordon
Merge major-release into main & bump version number in setup.py (#938)
* Merge main into major-release (#814)
  * Use black formatting in addition to flake8 (#796): run the black formatter on the entire repository; update requirements.txt and CONTRIBUTING.md to reflect the black format; use black linting in the circleci test job; use a longer variable name to resolve flake8 E741; move noqa comments back to their proper lines after the reformat.
  * Standardize S3 prefix conventions (#803): catch exceptions when a user does not have exhaustive access to keys in an S3 bucket.
  * Add default parameter flexibility (#807): skip the new `/` logic checks if the prefix is `None` (which is true by default).
  * MoveOn Shopify / ActionKit changes (#801): add an access_token authentication option for Shopify; remove an unnecessary check, since the access token will either be None or explicitly set and an empty string need not be handled; add get_orders and get_transactions functions, plus a function to get a single order, all with tests; style fixes. Co-authored-by: sjwmoveon, Alex French, and Kathy Nguyen.
  * Catch file extensions in S3 prefix (#809): add exception handling and shorten logs for flake8; add logic for the default case, plus file logic and a note to the user; restructure the prefix logic by moving the prefix -> prefix/ fallback into a try/except block, which is more robust for most use cases while adding the flexibility desired for split-permission buckets; drop the nested try/catch and add a more verbose error message. Co-authored-by: willyraedy.
  Co-authored-by: Austin Weisgrau, Ian, Cody Gordon, sjwmoveon, Alex French, Kathy Nguyen, and willyraedy.
* DatabaseConnector interface to major release (#815)
  * Create the DatabaseConnector, implement it for the DB connectors, and add it to the standard imports.
  * Remove the reference to padding in copy(); add database_discover and fix inheritance; remove strict_length from copy(), then put it back in its original order and remove the strict_length stub from BigQuery.
  * Fix the discover_database export statement; add a return annotation to the MySQL table_exists; run a black formatter pass.
  * Add more documentation on when the connector should be used, plus developer notes; fix the code block documentation; enhance discover_database and add unit tests for it; reverse the Postgres string_length change. Co-authored-by: Jason Walker.
* Zoom authentication + polling API (#873)
  * Add multiple Python versions to CI tests (#858): combine the CI jobs and update the ubuntu image; after experimenting with apt and the ppa/deadsnakes repository, switch back to the pyenv approach to install the Python versions; move tests out of the circleci config into a new GitHub Actions config (no caching yet, more of a proof of concept); split Mac tests into a separate file; set the testing environment variable separately; make a first attempt at a dependency cache; remove Windows tests for now; run tests on merging of PRs and update the readme to note that Python 3.7 is not supported.
  * Enable passing `identifiers` to ActionNetwork `upsert_person()` (#861): remove unused arguments from the method, since self.get_page doesn't exist and that call doesn't return anything; the return statement works fine as-is to return all tags and handles pagination on its own.
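As a rough illustration of the `identifiers` change in #861 above: extra identifiers are merged into the person payload alongside the email address. Only the `identifiers` argument name comes from the PR; the helper and payload shape below are hypothetical, not the connector's actual implementation.

```python
# Hypothetical sketch: merge caller-supplied identifiers into an
# Action Network-style person payload. Only the `identifiers` argument
# name comes from PR #861; the helper and field layout are illustrative.
def build_person_payload(email, identifiers=None):
    payload = {"person": {"email_addresses": [{"address": email}]}}
    if identifiers:
        # Identifiers are strings like "some_system:123" pointing at
        # records in external systems.
        payload["person"]["identifiers"] = list(identifiers)
    return payload

payload = build_person_payload(
    "user@example.com", identifiers=["external_id:123"]
)
```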
  * Include the deprecated per_page argument for backwards compatibility, emitting a deprecation warning if it is used; include examples in the docstring for the `identifiers` argument; expand the documentation on ActionNetwork identifiers.
  * Add a pre-commit hook config to run flake8 and black on commit (#864), with notes added to the README on how to install and set it up.
  * Zoom connector work: switch from jwt to s2s oauth; scaffold new functions and add docs; add type handling and pass in updated params; move the access token function and pass the access token key only; use a temporary client to generate the token; mock the request in the constructor; drop unused imports; scaffold and write unit tests; drop the poll endpoints for now; black format throughout. Co-authored-by: Shauna and Austin Weisgrau.
* Merging main before release (#880)
  * Re-merges the CI (#858), ActionNetwork `identifiers` (#861), and pre-commit hook (#864) changes described above.
  * Add events helpers to the PDI connector (#865): add helpers to the Events object; add docstrings and docs; lint; fix a typo and enforce validation; add return docs and events tests using a mock PDI; mark live tests; add an alias; drop unused imports.
  * Change the release number (#872); add a release notes yml (#878). Co-authored-by: Shauna, Austin Weisgrau, and sharinetmc.
* Switch from API key to Personal Access Token (#866)
* Wraedy/bigquery db connector (#875)
  * Builds on the DatabaseConnector work from #815, then: create a bigquery folder in the databases folder; create query parity between BigQuery and Redshift; mock up copy functionality for BigQuery and add a duplicate function; move the transaction to a helper function; implement upsert; fix imports and packages; add get-tables and get-views methods; add query return flexibility; match the BigQuery APIs with Redshift; make S3-to-GCS more generic; add transaction support to BigQuery; add GCS docs and process the job config in a function; add, then drop, a raw download param; add docstrings for copying from GCS and S3; add the source path to the aws transfer spec and the Code object to the imports; check the status code; add pattern handling and a quote character to LoadJobConfig; add a schema to copy-from-GCS; drop dist and sort keys (no longer input params); add a delimiter param; use the schema definition and write a column-mapping helper; pass the formatted schema to the load_uri function; clarify the transaction guidance; clean up list-blobs and storage transfer polling; upgrade the cloud storage package; use a list of schema mappings; scaffold a big-file function; default to compression (this can be made more flexible later); use decompress and implement unzipping and re-uploading the cloud file; add a destination path; drop the max wait time; add kwargs to put-blob (potentially useful for metadata such as content type); add GCS to/from helpers and a to_bigquery function; update the big-file logic and allow jagged rows; test additional methods, duplicate tables, the drop flag for duplicates, and a basic upsert; add typing; move non-essential logs to debug.
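The "implement upsert" step above is commonly done by staging rows and merging on a key column. A minimal sketch of building such a statement follows; the table and column names are illustrative, and this is not the connector's actual SQL.

```python
# Illustrative sketch of an upsert via a staging table and MERGE,
# the usual pattern behind a BigQuery-style upsert. All names are
# made up; this is not the Parsons implementation.
def build_merge_sql(target, staging, key, columns):
    # Update every non-key column from the staging table on a match.
    assignments = ", ".join(f"t.{c} = s.{c}" for c in columns if c != key)
    col_list = ", ".join(columns)
    src_list = ", ".join(f"s.{c}" for c in columns)
    return (
        f"MERGE `{target}` t USING `{staging}` s ON t.{key} = s.{key} "
        f"WHEN MATCHED THEN UPDATE SET {assignments} "
        f"WHEN NOT MATCHED THEN INSERT ({col_list}) VALUES ({src_list})"
    )

sql = build_merge_sql(
    "proj.ds.people", "proj.ds.people_stage", "id", ["id", "name"]
)
```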
  * Add UUID and datetime support to the bigquery type map; drop the GCS class function for now (it doesn't currently work) and move the class back to its old location; remove the transaction error handler; add a description conditional block for s3 and change one more conditional to s3; handle empty source paths; revert the new import path. Co-authored-by: Jason Walker, Ian, and Kasia Hinkson.
* BigQuery - Add column helpers (#911): add column outlines; optionally log the query.
* Google BigQuery - Clean up autodetect logic (#914): clean up the schema autodetect logic.
* Update stale references to parsons.databases.bigquery (#920): fix BQ references in discover_database, tofrom.py, and test_discover_database.py.
* Fix GCS hidden error (#930): add logging around the transfer job request and change the error message.
* GoogleCloudStorage - Handle zip / gzip files flexibly (#937)
* Update release (#894)
* Zoom Polls (#886)
  * Builds on the Zoom authentication work (#873) described above: write functions and tests; update the docstring typing; add poll results and update the table output; add scope requirements; add to docs (we can add more here if folks see fit). Co-authored-by: Jason, Austin Weisgrau, Cody Gordon, sjwmoveon, Alex French, Kathy Nguyen, willyraedy, Jason Walker, and Shauna.
* Check for empty tables in zoom poll results (#897). Co-authored-by: Jason Walker.
* Bump urllib3 from 1.26.5 to 1.26.17 (#901): dependabot update.
* Add MobileCommons Connector (#896)
  * Add the MobileCommons class with a mc_get_request method; get broadcasts and fix the get-broadcast request; incorporate Daniel's suggestions and finish up get_broadcasts; remove page_count and use the page record number instead, then add page_count back in (not all get responses include the num param, but they do include page_count); fix the logging numbers; add create_profile; fix the error message for post requests; functionalize the status_code check and break out a parse_get_request function; fix the limit and pagination logic; add tests and documentation; add MC to the init file (after a few reverts, fix the init); update the testing docs with underscores; lint and break up long responses. Co-authored-by: sharinetmc.
* #741: Deprecate Slack chat.postMessage `as_user` argument and allow for new authorship arguments (#891)
  * Remove the argument and warn that its usage is deprecated; remove as_user from the sample code; add the user customization arguments in lieu of the deprecated as_user argument, with a comment on the permissions required to use them; use kwargs and surface the whole response; still allow the deprecated argument but surface a failed response better; add to retry; add a warning for using thread_ts; move the documentation to the optional arguments.
* #816: Airtable.get_records() fields argument can be either str or list (#892): allow fields to be a str object.
* Nir's actionnetwork changes (#900)
  * Add get functions to support all ActionNetwork objects (Advocacy Campaigns, Attendances, Campaigns, Custom Fields, Donations, Embeds, Event Campaigns, Events, Forms, Fundraising Pages, Items, Lists, Messages, Metadata, Outreaches, People, Petitions, Queries, Signatures, Submissions, Tags, Taggings, Wrappers); lint and black-format; remove unwanted/unused lines.
* Fix airtable.insert_records table arg (#907)
* Add canales s3 functions (#885)
  * Add raw s3 functions to parsons and selected functions to s3.py; delete redundant functions and move the drop_and_save function to redshift.py; add s3 unit tests and an rs.drop_and_unload unit test; remove unused packages and unneeded modules.
* Bump urllib3 from 1.26.17 to 1.26.18 (#904): dependabot update. Co-authored-by: sharinetmc.
* New connector for working with the Catalist Match API (#912)
  * Enable api_connector to return the error message in the `text` attribute, since some API error responses contain the message there; add the connector with tests using pytest-mock (added to requirements with an open-ended version for compatibility); add a more verbose error on match failure; parameterize the template_id variable; expand the docstrings; include a Catalist documentation rst file.
* Enhancement: ActionNetwork connector: add unpack_statistics param to get_messages (#917)
  * Adds the ability to unpack the statistics, which are returned as a nested dictionary in the response; unpack_statistics defaults to False.
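Unpacking a nested statistics dictionary into flat columns, as #917 describes for get_messages, might look roughly like this. The field names are illustrative; only the idea of flattening the nested `statistics` dict comes from the PR.

```python
# Illustrative: flatten a nested "statistics" dict into prefixed
# top-level keys, the kind of unpacking #917 describes. Field names
# here are made up, not Action Network's actual response schema.
def unpack_statistics(record):
    flat = {k: v for k, v in record.items() if k != "statistics"}
    for key, value in record.get("statistics", {}).items():
        flat[f"statistics_{key}"] = value
    return flat

row = unpack_statistics(
    {"id": "m1", "statistics": {"sent": 100, "opened": 40}}
)
```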
  * Add a tbl variable; format with black; fix the docs. Co-authored-by: mattkrausse.
* Adding rename_columns method to Parsons Table (#923)
  * Add rename_columns for renaming multiple columns at once; add clarification to the docs about the dict structure. Co-authored-by: mattkrausse.
* Add http response to update_mailer (#924): without returning the response, or at least the status code, it's impossible to check for errors.
* Enable passing arbitrary additional fields to the NGPVAN person match API (#916)
* GCS pathing work: match the GCS API to S3 with two different functions; use csv as the default; drop an unused var; use a temp file; add docs and comments, and replicate in gzip; set a timeout; black format. Signed-off-by: dependabot[bot]. Co-authored-by: Kasia Hinkson, Jason, Austin Weisgrau, Cody Gordon, sjwmoveon, Alex French, Kathy Nguyen, willyraedy, Jason Walker, Shauna, dependabot[bot], Cormac Martinez del Rio, sharinetmc, Angela Gloyna, NirTatcher, justicehaze, mattkrausse, mkrausse-ggtx, and Sophie Waldman.
* GoogleCloudStorage - Add GCS Destination Path Param (#936)
move drop_and_save function to redshift.py * create test file * add s3 unit tests * add rs.drop_and_unload unit test * add printing for debugging * remove testing file * unsaved changes * remove unused packages * remove unneeded module * Bump urllib3 from 1.26.17 to 1.26.18 (#904) Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.17 to 1.26.18. - [Release notes](https://github.com/urllib3/urllib3/releases) - [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst) - [Commits](urllib3/urllib3@1.26.17...1.26.18) --- updated-dependencies: - dependency-name: urllib3 dependency-type: direct:production ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: sharinetmc <[email protected]> * New connector for working with the Catalist Match API (#912) * Enable api_connector to return error message in `text` attribute Some API error responses contain the error message in the `text` attribute, so this update makes it possible to fetch that message if it exists. * New connector to work with the Catalist Match API * Add pytest-mock to requirements to support mocking in pytests * Tests on the catalist match connector * More open ended pytest-mock version for compatibility * Expand docstring documetation based on feedback in PR * More verbose error on match failure * Parameterize template_id variable * Expand docstrings on initial setup * Include Catalist documentation rst file * Enhancement: Action Network Connector: Added unpack_statistics param in get_messages method (#917) * Adds parameter to get_messages This adds the ability to unpack the statistics which are returned as a nested dictionary in the response. * added unpack_statistics to an.get_messages() * added parameters to get_messages and built tests * changes unpack_statistics to False by default. 
* added tbl variable * formatted with black * fixed docs --------- Co-authored-by: mattkrausse <[email protected]> * Adding rename_columns method to Parsons Table (#923) * added rename_columns for multiple cols * linted * added clarification to docs about dict structure * updated docs --------- Co-authored-by: mattkrausse <[email protected]> * Add http response to update_mailer (#924) Without returning the response, or at least the status code, it's impossible to check for errors. * Enable passing arbitrary additional fields to NGPVAN person match API (#916) * match gcs api to s3 * Revert "Merge branch 'main' into ianferguson/gcs-pathing" This reverts commit 5b1ef6e, reversing changes made to f0eb3d6. --------- Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: Kasia Hinkson <[email protected]> Co-authored-by: Jason <[email protected]> Co-authored-by: Austin Weisgrau <[email protected]> Co-authored-by: Cody Gordon <[email protected]> Co-authored-by: sjwmoveon <[email protected]> Co-authored-by: Alex French <[email protected]> Co-authored-by: Kathy Nguyen <[email protected]> Co-authored-by: willyraedy <[email protected]> Co-authored-by: Jason Walker <[email protected]> Co-authored-by: Shauna <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Cormac Martinez del Rio <[email protected]> Co-authored-by: sharinetmc <[email protected]> Co-authored-by: Angela Gloyna <[email protected]> Co-authored-by: NirTatcher <[email protected]> Co-authored-by: justicehaze <[email protected]> Co-authored-by: mattkrausse <[email protected]> Co-authored-by: mkrausse-ggtx <[email protected]> Co-authored-by: Sophie Waldman <[email protected]> * Bump version number to 3.0.0 * Fix import statement in test_bigquery.py * Add Tests to major-release Branch (#949) * add major release branch to gh workflow * add mac tests * null changes (want to trigger test) * remove temp change * Resolve GCP Test Failures For 
Major Release (#948) * add full import * resolve bigquery unzip test * remove keyword * fix flake8 errors * fix linting * push docs * fix flake8 * too long for flake8 * Install google-cloud-storage-transfer for google extras (#946) This is required for the import of storage_transfer to work Co-authored-by: Ian <[email protected]> * Revert "Enable passing `identifiers` to ActionNetwork `upsert_person()` (#861)" (#945) This reverts commit 77ead60. Co-authored-by: Ian <[email protected]> * BigQuery - Add row count function to connector (#913) * add row count function * use sql * add unit test * unit test * whoops! * add examples (#952) * Parse Boolean types by default (#943) * Parse Boolean types by default Commit 766cfae created a feature for parsing boolean types but turned it off by default. This commit turns that feature on by default and adds a comment about how to turn it off and what that does. * Fix test expectations after updating boolean parsing behavior * Only ever interpret python bools as SQL booleans No longer coerce by default any of the following as booleans: "yes", "True", "t", 1, 0, "no", "False", "f" * Fix redshift test parsing bools * Move redshift test into test_databases folder * Remove retired TRUE_VALS and FALSE_VALS configuration variables We now only use python booleans --------- Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: Jason <[email protected]> Co-authored-by: Austin Weisgrau <[email protected]> Co-authored-by: Ian <[email protected]> Co-authored-by: Cody Gordon <[email protected]> Co-authored-by: sjwmoveon <[email protected]> Co-authored-by: Alex French <[email protected]> Co-authored-by: Kathy Nguyen <[email protected]> Co-authored-by: willyraedy <[email protected]> Co-authored-by: Jason Walker <[email protected]> Co-authored-by: sharinetmc <[email protected]> Co-authored-by: Kathy Nguyen <[email protected]> Co-authored-by: Kasia Hinkson <[email protected]> Co-authored-by: dexchan <[email protected]> 
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Cormac Martinez del Rio <[email protected]> Co-authored-by: Angela Gloyna <[email protected]> Co-authored-by: NirTatcher <[email protected]> Co-authored-by: justicehaze <[email protected]> Co-authored-by: mattkrausse <[email protected]> Co-authored-by: mkrausse-ggtx <[email protected]> Co-authored-by: Sophie Waldman <[email protected]>
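Several of the changes above share a common backwards-compatibility pattern: a retired argument (ActionNetwork's `per_page`, Slack's `as_user`) is still accepted but triggers a deprecation warning instead of silently breaking old call sites. A minimal sketch of that pattern (the function body and names here are hypothetical, not the actual Parsons code):

```python
import warnings


def get_tags(per_page=None):
    # Hypothetical sketch: `per_page` no longer has any effect, but accepting
    # it (with a warning) keeps existing callers working during the transition.
    if per_page is not None:
        warnings.warn(
            "`per_page` is deprecated; pagination is handled automatically.",
            DeprecationWarning,
        )
    return []  # stand-in for the real paginated fetch
```

Callers who pass the old argument get the same result plus a `DeprecationWarning` they can act on before the argument is removed entirely.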
1 parent a13b2ca commit 626edc7

27 files changed (+2186 −627 lines)

.github/workflows/test-linux-windows.yml

Lines changed: 2 additions & 2 deletions

```diff
@@ -2,9 +2,9 @@ name: tests

 on:
   pull_request:
-    branches: ["main"]
+    branches: ["main", "major-release"]
   push:
-    branches: ["main"]
+    branches: ["main", "major-release"]

 env:
   TESTING: 1
```

.github/workflows/tests-mac.yml

Lines changed: 2 additions & 2 deletions

```diff
@@ -3,9 +3,9 @@ name: tests for mac

 on:
   pull_request:
-    branches: ["main"]
+    branches: ["main", "major-release"]
   push:
-    branches: ["main"]
+    branches: ["main", "major-release"]

 env:
   TESTING: 1
```

.gitignore

Lines changed: 1 addition & 0 deletions

```diff
@@ -110,6 +110,7 @@ venv.bak/
 # scratch
 scratch*
 old!_*
+test.ipynb

 # vscode
 .vscode/
```

CONTRIBUTING.md

Lines changed: 154 additions & 1 deletion

````diff
@@ -12,4 +12,157 @@ You can contribute by:
 * [teaching and mentoring](https://www.parsonsproject.org/pub/contributing-guide#teaching-and-mentoring)
 * [helping "triage" issues and review pull requests](https://www.parsonsproject.org/pub/contributing-guide#maintainer-tasks)

-If you're not sure how to get started, please ask for help! We're happy to chat and help you find the best way to get involved.
+We encourage folks to review existing issues before starting a new issue.
+
+* If the issue you want exists, feel free to use the *thumbs up* emoji to upvote the issue.
+* If you have additional documentation or context that would be helpful, please add it using comments.
+* If you have code snippets, but don't have time to do the full write-up, please add them to the issue!
+
+We use labels to help us classify issues. They include:
+* **bug** - something in Parsons isn't working the way it should
+* **enhancement** - new feature or request (e.g. a new API connector)
+* **good first issue** - an issue that would be good for someone who is new to Parsons
+
+## Contributing Code to Parsons
+
+Generally, code contributions to Parsons will be either enhancements or bug fixes (or contributions of [sample code](#sample-code), discussed below). All changes to the repository are made [via pull requests](#submitting-a-pull-request).
+
+If you would like to contribute code to Parsons, please review the issues in the repository and find one you would like to work on. If you are new to Parsons or to open source projects, look for issues with the [**good first issue**](https://github.com/move-coop/parsons/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) label. Once you have found your issue, please add a comment to the issue that lets others know that you are interested in working on it. If you're having trouble finding something to work on, please ask us for help on Slack.
+
+The bulk of Parsons is made up of Connector classes, which are Python classes that help move data in and out of third party services. When you feel ready, you may want to contribute by [adding a new Connector class](https://move-coop.github.io/parsons/html/build_a_connector.html).
+
+### Making Changes to Parsons
+
+To make code changes to Parsons, you'll need to set up your development environment, make your changes, and then submit a pull request.
+
+To set up your development environment:
+
+* Fork the Parsons project using [the “Fork” button in GitHub](https://guides.github.com/activities/forking/)
+* Clone your fork to your local computer
+* Set up a [virtual environment](#virtual-environments)
+* Install the [dependencies](#installing-dependencies)
+* Check that everything's working by [running the unit tests](#unit-tests) and the [linter](#linting)
+
+Now it's time to make your changes. We suggest taking a quick look at our [coding conventions](#coding-conventions) - it'll make the review process easier down the line. In addition to any code changes, make sure to update the documentation and the unit tests if necessary. Not sure if your changes require test or documentation updates? Just ask in Slack or through a comment on the relevant issue. When you're done, make sure to run the [unit tests](#unit-tests) and the [linter](#linting) again.
+
+Finally, you'll want to [submit a pull request](#submitting-a-pull-request). And that's it!
+
+#### Virtual Environments
+
+If required dependencies conflict with packages or modules you need for other projects, you can create and use a [virtual environment](https://docs.python.org/3/library/venv.html).
+
+```
+python3 -m venv .venv       # Creates a virtual environment in the .venv folder
+source .venv/bin/activate   # Activate in Unix or MacOS
+.venv/Scripts/activate.bat  # Activate in Windows
+```
+
+#### Installing Dependencies
+
+Before running or testing your code changes, be sure to install all of the required Python libraries that Parsons depends on.
+
+From the root of the parsons repository, run the following command:
+
+```bash
+> pip install -r requirements.txt
+```
+
+#### Unit Tests
+
+When contributing code, we ask you to add tests that can be used to verify that the code is working as expected. All of our unit tests are located in the `test/` folder at the root of the repository.
+
+We use the pytest tool to run our suite of automated unit tests. The pytest command line tool is installed as part of the Parsons dependencies.
+
+To run the entire suite of unit tests, execute the following command:
+
+```bash
+> pytest -rf test/
+```
+
+Once the pytest tool has finished running all of the tests, it will output details around any errors or test failures it encountered. If no failures are identified, then you are good to go!
+
+**Note:** Some tests are written to call out to external APIs, and will be skipped as part of standard unit testing. This is expected.
+
+See the [pytest documentation](https://docs.pytest.org/en/latest/contents.html) for more info and many more options.
+
+#### Linting
+
+We use the [black](https://github.com/psf/black) and [flake8](http://flake8.pycqa.org/en/latest/) tools to [lint](https://en.wikipedia.org/wiki/Lint_(software)) the code in the repository to make sure it matches our preferred style. Both tools are installed as part of the Parsons dependencies.
+
+Run the following commands from the root of the Parsons repository to lint your code changes:
+
+```bash
+> flake8 --max-line-length=100 --extend-ignore=E203,W503 parsons
+> black parsons
+```
+
+Pre-commit hooks are available to enforce black and isort formatting on
+commit. You can also set up your IDE to reformat using black and/or isort on
+save.
+
+To set up the pre-commit hooks, install pre-commit with `pip install
+pre-commit`, and then run `pre-commit install`.
+
+#### Coding Conventions
+
+The following is a list of best practices to consider when writing code for the Parsons project:
+
+* Each tool connector should be its own unique class (e.g. ActionKit, VAN) in its own Python package. Use existing connectors as examples when deciding how to lay out your code.
+
+* Methods should be named using a verb_noun structure, such as `get_activist()` or `update_event()`.
+
+* Methods should reflect the vocabulary utilized by the original tool where possible to maintain transparency. For example, Google Cloud Storage refers to file-like objects as blobs, so the methods are called `get_blob()` rather than `get_file()`.
+
+* Methods that can work with arbitrarily large data (e.g. database or API queries) should use Parsons Tables to hold the data instead of standard Python collections (e.g. lists, dicts).
+
+* You should avoid abbreviations for method names and variable names where possible.
+
+* Inline comments explaining complex code and methods are appreciated.
+
+* Capitalize the word Parsons for consistency where possible, especially in documentation.
+
+If you are building a new connector or extending an existing connector, there are more best practices in the [How to Build a Connector](https://move-coop.github.io/parsons/html/build_a_connector.html) documentation.
+
+## Documentation
+
+Parsons documentation is built using the Python Sphinx tool. Sphinx uses the `docs/*.rst` files in the repository to create the documentation.
+
+We have a [documentation label](https://github.com/move-coop/parsons/issues?q=is%3Aissue+is%3Aopen+label%3Adocumentation) that may help you find good docs issues to work on. If you are adding a new connector, you will need to add a reference to the connector to one of the .rst files. Please use the existing documentation as an example.
+
+When editing documentation, make sure you are editing the source files (with .md or .rst extension) and not the build files (.html extension).
+
+The workflow for documentation changes is a bit simpler than for code changes:
+
+* Fork the Parsons project using [the “Fork” button in GitHub](https://guides.github.com/activities/forking/)
+* Clone your fork to your local computer
+* Change into the `docs` folder and install the requirements with `pip install -r requirements.txt` (you may want to set up a [virtual environment](#virtual-environments) first)
+* Make your changes and re-build the docs by running `make html`. (Note: this builds only a single version of the docs, from the current files. To create docs with multiple versions like our publicly hosted docs, run `make deploy_docs`.)
+* Open these files in your web browser to check that they look as you expect.
+* [Submit a pull request](#submitting-a-pull-request)
+
+When you make documentation changes, you only need to track the source files with git. The built docs in the html folder should not be included.
+
+You should not need to worry about the unit tests or the linter if you are making documentation changes only.
+
+## Contributing Sample Code
+
+One important way to contribute to the Parsons project is to submit sample code that provides recipes and patterns for how to use the Parsons library.
+
+We have a folder called `useful_resources/` in the root of the repository. If you have scripts that incorporate Parsons, we encourage you to add them there!
+
+The workflow for adding sample code is:
+
+* Fork the Parsons project using [the “Fork” button in GitHub](https://guides.github.com/activities/forking/)
+* Clone your fork to your local computer
+* Add your sample code into the `useful_resources/` folder
+* [Submit a pull request](#submitting-a-pull-request)
+
+You should not need to worry about the unit tests or the linter if you are only adding sample code.
+
+## Submitting a Pull Request
+
+To submit a pull request, follow [these instructions to create a Pull Request from your fork](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request-from-a-fork) back to the original Parsons repository.
+
+The Parsons team will review your pull request and provide feedback. Please feel free to ping us if no one's responded to your Pull Request after a few days. We may not be able to review it right away, but we should be able to tell you when we'll get to it.
+
+Once your pull request has been approved, the Parsons team will merge your changes into the Parsons repository.
````
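The coding conventions added to CONTRIBUTING.md above (one class per tool, verb_noun method names, vocabulary borrowed from the underlying service) can be sketched as a skeleton connector. `ExampleTool` and its endpoints are made up for illustration; they are not part of Parsons:

```python
class ExampleTool:
    """A hypothetical connector skeleton following the conventions above."""

    def __init__(self, api_key=None):
        # Real connectors fall back to an environment variable here
        # (via check_env.check) instead of requiring an explicit argument.
        self.api_key = api_key

    def get_activist(self, activist_id):
        # verb_noun naming: "get" + the object the tool itself calls "activist".
        # A real method would return a Parsons Table for large result sets.
        return {"id": activist_id}

    def update_event(self, event_id, **fields):
        # verb_noun naming again; extra fields are passed through unchanged.
        return {"id": event_id, **fields}
```

A real connector would live in its own package under `parsons/` and be exercised by unit tests in `test/`, as described above.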

Dockerfile

Lines changed: 1 addition & 1 deletion

```diff
@@ -42,4 +42,4 @@ RUN python setup.py develop
 RUN mkdir /app
 WORKDIR /app
 # Useful for importing modules that are associated with your python scripts:
-env PYTHONPATH=.:/app
+ENV PYTHONPATH=.:/app
```

docs/airtable.rst

Lines changed: 6 additions & 6 deletions

```diff
@@ -6,7 +6,7 @@ Overview
 ********

 The Airtable class allows you to interact with an `Airtable <https://airtable.com/>`_ base. In order to use this class
-you must generate an Airtable API Key which can be found in your Airtable `account settings <https://airtable.com/account>`_.
+you must generate an Airtable personal access token which can be found in your Airtable `settings <https://airtable.com/create/tokens>`_.

 .. note::
   Finding The Base Key
@@ -18,20 +18,20 @@ you must generate an Airtable API Key which can be found in your Airtable `accou
 **********
 QuickStart
 **********
-To instantiate the Airtable class, you can either store your Airtable API
-``AIRTABLE_API_KEY`` as an environmental variable or pass in your api key
+To instantiate the Airtable class, you can either store your Airtable personal access token
+``AIRTABLE_PERSONAL_ACCESS_TOKEN`` as an environmental variable or pass in your personal access token
 as an argument. You also need to pass in the base key and table name.

 .. code-block:: python

   from parsons import Airtable

-  # First approach: Use API credentials via environmental variables and pass
+  # First approach: Use personal access token via environmental variable and pass
   # the base key and the table as arguments.
   at = Airtable(base_key, 'table01')

-  # Second approach: Pass API credentials, base key and table name as arguments.
-  at = Airtable(base_key, 'table01', api_key='MYFAKEKEY')
+  # Second approach: Pass personal access token, base key and table name as arguments.
+  at = Airtable(base_key, 'table01', personal_access_token='MYFAKETOKEN')


 You can then call various endpoints:
```

docs/google.rst

Lines changed: 21 additions & 11 deletions

```diff
@@ -68,7 +68,7 @@ Google Cloud projects.
 Quickstart
 ==========

-To instantiate the GoogleBigQuery class, you can pass the constructor a string containing either the name of the Google service account credentials file or a JSON string encoding those credentials. Alternatively, you can set the environment variable ``GOOGLE_APPLICATION_CREDENTIALS`` to be either of those strings and call the constructor without that argument.
+To instantiate the `GoogleBigQuery` class, you can pass the constructor a string containing either the name of the Google service account credentials file or a JSON string encoding those credentials. Alternatively, you can set the environment variable ``GOOGLE_APPLICATION_CREDENTIALS`` to be either of those strings and call the constructor without that argument.

 .. code-block:: python

@@ -78,16 +78,18 @@ To instantiate the GoogleBigQuery class, you can pass the constructor a string c
   # be the file name or a JSON encoding of the credentials.
   os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'google_credentials_file.json'

-  big_query = GoogleBigQuery()
+  bigquery = GoogleBigQuery()

 Alternatively, you can pass the credentials in as an argument. In the example below, we also specify the project.

 .. code-block:: python

   # Project in which we're working
   project = 'parsons-test'
-  big_query = GoogleBigQuery(app_creds='google_credentials_file.json',
-                             project=project)
+  bigquery = GoogleBigQuery(
+      app_creds='google_credentials_file.json',
+      project=project
+  )

 We can now upload/query data.

@@ -98,7 +100,7 @@ We can now upload/query data.

   # Table name should be project.dataset.table, or dataset.table, if
   # working with the default project
-  table_name = project + '.' + dataset + '.' + table
+  table_name = f"`{project}.{dataset}.{table}`"

   # Must be pre-existing bucket. Create via GoogleCloudStorage() or
   # at https://console.cloud.google.com/storage/create-bucket. May be
@@ -107,23 +109,31 @@ We can now upload/query data.
   gcs_temp_bucket = 'parsons_bucket'

   # Create dataset if it doesn't already exist
-  big_query.client.create_dataset(dataset=dataset, exists_ok=True)
+  bigquery.client.create_dataset(dataset=dataset, exists_ok=True)

   parsons_table = Table([{'name':'Bob', 'party':'D'},
                          {'name':'Jane', 'party':'D'},
                          {'name':'Sue', 'party':'R'},
                          {'name':'Bill', 'party':'I'}])

   # Copy table in to create new BigQuery table
-  big_query.copy(table_obj=parsons_table,
-                 table_name=table_name,
-                 tmp_gcs_bucket=gcs_temp_bucket)
+  bigquery.copy(
+      table_obj=parsons_table,
+      table_name=table_name,
+      tmp_gcs_bucket=gcs_temp_bucket
+  )

   # Select from project.dataset.table
-  big_query.query(f'select name from {table_name} where party = "D"')
+  bigquery.query(f'select name from {table_name} where party = "D"')
+
+  # Query with parameters
+  bigquery.query(
+      f"select name from {table_name} where party = %s",
+      parameters=["D"]
+  )

   # Delete the table when we're done
-  big_query.client.delete_table(table=table_name)
+  bigquery.client.delete_table(table=table_name)

 ===
 API
```
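The `table_name` change in the docs/google.rst diff above swaps plain string concatenation for a backtick-quoted identifier. Backticks let BigQuery accept fully qualified names whose project ID contains a dash (such as `parsons-test`), which would otherwise be parsed as a minus sign. A minimal illustration of the string being built:

```python
project, dataset, table = "parsons-test", "my_dataset", "my_table"

# Backtick-quote the fully qualified name so the dash in the project ID
# is treated as part of the identifier, not as an operator:
table_name = f"`{project}.{dataset}.{table}`"

query = f"select name from {table_name} where party = 'D'"
```

The quoted name can then be interpolated into SQL exactly as the documentation examples do.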

parsons/airtable/airtable.py

Lines changed: 8 additions & 5 deletions

```diff
@@ -15,14 +15,17 @@ class Airtable(object):
         table_name: str
             The name of the table in the base. The table name is the equivilant of the sheet name
             in Excel or GoogleDocs.
-        api_key: str
-            The Airtable provided api key. Not required if ``AIRTABLE_API_KEY`` env variable set.
+        personal_access_token: str
+            The Airtable personal access token. Not required if ``AIRTABLE_PERSONAL_ACCESS_TOKEN``
+            env variable set.
     """

-    def __init__(self, base_key, table_name, api_key=None):
+    def __init__(self, base_key, table_name, personal_access_token=None):

-        self.api_key = check_env.check("AIRTABLE_API_KEY", api_key)
-        self.client = client(base_key, table_name, self.api_key)
+        self.personal_access_token = check_env.check(
+            "AIRTABLE_PERSONAL_ACCESS_TOKEN", personal_access_token
+        )
+        self.client = client(base_key, table_name, self.personal_access_token)

     def get_record(self, record_id):
         """
```
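The `check_env.check` call in the constructor above implements Parsons' usual credential fallback: use the explicit argument if given, otherwise read the environment variable, otherwise fail loudly. A rough sketch of that behavior (assumed for illustration, not the actual `parsons.utilities.check_env` source):

```python
import os


def check(env_variable, value=None):
    # Assumed behavior: prefer the explicit argument, then the environment,
    # and raise a clear error when neither is set.
    if value is not None:
        return value
    try:
        return os.environ[env_variable]
    except KeyError:
        raise KeyError(
            f"No {env_variable} found. Store it as an environment variable "
            f"or pass it as an argument."
        )
```

This is why the docs can say the token is "not required if the env variable is set": the constructor resolves it either way.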

parsons/databases/database/constants.py

Lines changed: 0 additions & 4 deletions

```diff
@@ -153,11 +153,7 @@

 VARCHAR = "varchar"
 FLOAT = "float"
-
-DO_PARSE_BOOLS = False
 BOOL = "bool"
-TRUE_VALS = ("TRUE", "T", "YES", "Y", "1", 1)
-FALSE_VALS = ("FALSE", "F", "NO", "N", "0", 0)

 # The following values are the minimum and maximum values for MySQL int
 # types. https://dev.mysql.com/doc/refman/8.0/en/integer-types.html
```
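The deleted `TRUE_VALS`/`FALSE_VALS` constants above implemented the old coercion behavior that the "Parse Boolean types by default" commit retired: strings like "yes"/"f" and the ints 0/1 are no longer treated as booleans, only genuine Python `bool` values are. A rough sketch of the new rule (the function name is hypothetical, not the actual Parsons internals):

```python
def to_sql_bool(value):
    # Sketch of the retained rule: only a real Python bool maps to a SQL
    # boolean; everything else ("yes", "t", 0, 1, ...) passes through unchanged.
    if isinstance(value, bool):
        return "TRUE" if value else "FALSE"
    return value
```

Note that `isinstance(1, bool)` is `False` in Python even though `1 == True`, so integer flags are deliberately left untouched.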
0 commit comments
