Squashed to keep the history more readable.
commit b4383d81f48f9c58cb53119cb453034c5676657f
Author: Mike McLean <mikem@redhat.com>
Date: Fri Jun 21 09:03:07 2024 -0400
unit tests
commit 151b6ea053fc2e93b104fb3f01749602401fa0ee
Author: Mike McLean <mikem@redhat.com>
Date: Tue Jun 18 17:55:35 2024 -0400
unit tests and fixes
commit 15457499665a0c0e0e45b17d19c6d07b6f681ca8
Author: Mike McLean <mikem@redhat.com>
Date: Tue Jun 18 17:14:01 2024 -0400
use tag name in waitrepo task for readability
commit a20a21d39d2cb96b02046788de77aa33a7cbc906
Author: Mike McLean <mikem@redhat.com>
Date: Tue Jun 18 17:00:45 2024 -0400
cleanup
commit a0058fce436a39de5cde6f11788ca4aaaa3553c0
Author: Mike McLean <mikem@redhat.com>
Date: Tue Jun 18 16:44:22 2024 -0400
better approach to repo lookup from task id
commit 057527d71318d4494d80a2f24510e82ac9bc33f8
Author: Mike McLean <mikem@redhat.com>
Date: Tue Jun 18 10:42:08 2024 -0400
support priority for requests
commit 882eaf2c4349e6f75db055fa36c80d66ab40526f
Author: Mike McLean <mikem@redhat.com>
Date: Tue Jun 18 10:16:44 2024 -0400
track user for request
commit 273739e2f43170d80dae9e3796185230fae0607e
Author: Mike McLean <mikem@redhat.com>
Date: Mon Jun 17 15:37:16 2024 -0400
update additional fields in repo_done_hook
commit d0a886eb161468675720549ad8a31921cd5c3647
Author: Mike McLean <mikem@redhat.com>
Date: Mon Jun 17 15:14:38 2024 -0400
simplify updateRepos
commit 2a3ab6839299dd507835804e6326d93f08aa4040
Author: Mike McLean <mikem@redhat.com>
Date: Mon Jun 17 15:03:39 2024 -0400
kojira: adjust cleanup of self.repos
commit dfc5934423b7f8f129ac9c737cc21d1798b33c2d
Author: Mike McLean <mikem@redhat.com>
Date: Mon Jun 17 14:03:57 2024 -0400
docs updates
commit 4c5d4c2b50b11844d5dd6c8295b33bcc4453928b
Author: Mike McLean <mikem@redhat.com>
Date: Mon Jun 17 09:18:10 2024 -0400
Apply repo_lifetime to custom repos even if current
commit 2b2d63a771244358f4a7d77766374448343d2c4c
Author: Mike McLean <mikem@redhat.com>
Date: Mon Jun 17 09:36:50 2024 -0400
fix migration script
commit 447a3f47270a324463a335d19b8e2c657a99ee9b
Author: Tomas Kopecek <tkopecek@redhat.com>
Date: Fri Jun 7 11:32:14 2024 +0200
migration script
commit f73bbe88eea7caf31c908fdaa5231e39d0f0d0a8
Author: Mike McLean <mikem@redhat.com>
Date: Fri Jun 14 15:30:24 2024 -0400
clean up some TODO items
commit 836c89131d2b125c2761cfbd3917473504d459e4
Author: Mike McLean <mikem@redhat.com>
Date: Fri Jun 14 11:43:13 2024 -0400
update unit tests
commit 4822ec580b96ae63778b71cee2127364bc31d258
Author: Mike McLean <mikem@redhat.com>
Date: Fri Jun 14 11:17:24 2024 -0400
streamline simple case for tag_first/last_change_event
commit 3474384c56a8a2e60288279b459000f3b9c54968
Author: Mike McLean <mikem@redhat.com>
Date: Tue Jun 11 16:11:55 2024 -0400
backwards compatible age checks in kojira
commit e796db0bdc6e70b489179bcddaa899855d64b706
Author: Mike McLean <mikem@redhat.com>
Date: Fri Jun 14 11:49:37 2024 -0400
repowatch unit test fixes
commit 7f17eb741502ab5417f70413f699c99e140f380d
Author: Mike McLean <mikem@redhat.com>
Date: Thu Jun 6 21:35:11 2024 -0400
adjust watch output; die if request fails
commit a0318c44576d6acab459f623c8ff0ab6961bd6b4
Author: Mike McLean <mikem@redhat.com>
Date: Thu Jun 6 20:45:56 2024 -0400
handle problem repos
commit d90ca6f9d41a39da86089a0fad7afdb649fd680b
Author: Mike McLean <mikem@redhat.com>
Date: Thu May 30 22:43:56 2024 -0400
fix typos
commit 29830d1b8125664ddeae5ccb7e6b6e53260cdc47
Author: Mike McLean <mikem@redhat.com>
Date: Thu May 30 16:57:48 2024 -0400
clarify --wait-repo help text
commit 43db92302643b67e7f6f419424d6813e5dca53f3
Author: Mike McLean <mikem@redhat.com>
Date: Tue May 21 17:32:44 2024 -0400
unit tests
commit 27f979fbccc5a286fba9caeec16ca7092fa79813
Author: Mike McLean <mikem@redhat.com>
Date: Tue May 21 17:23:32 2024 -0400
wait-repo compat
commit f3a8f76d9340b1bdddb5f7bab154962e848d4d10
Author: Mike McLean <mikem@redhat.com>
Date: Thu May 16 20:14:59 2024 -0400
fixes
commit 6638b0fd76b31aa49ad0cf79639014ad9ace09f0
Author: Mike McLean <mikem@redhat.com>
Date: Thu May 16 16:41:50 2024 -0400
use old regen-repo code for older hubs
commit 7f2d8ec49fe1d2d511759221a821a146a4ef6837
Author: Mike McLean <mikem@redhat.com>
Date: Thu May 16 16:18:36 2024 -0400
fixes
commit 791df709c10d3c10c9b79f59f4fda435ac3bd285
Author: Mike McLean <mikem@redhat.com>
Date: Thu May 16 12:22:09 2024 -0400
don't trigger regens from scheduler. kojira is enough
commit 75f5e695287b92d53e4f173f57b12b5a7159adaf
Author: Mike McLean <mikem@redhat.com>
Date: Wed May 15 22:54:08 2024 -0400
more docs
commit 0e0f53160bbe09e35409dabce63739eb50813310
Author: Mike McLean <mikem@redhat.com>
Date: Wed May 15 21:49:27 2024 -0400
support MaxRepoTasksMaven
commit 88da9639860cb7c0d92f7c3bc881cd480b4e1620
Author: Mike McLean <mikem@redhat.com>
Date: Wed May 15 16:15:12 2024 -0400
drop unused method
commit 4cdbe6c4d2ba8735312d0cd0095612c159db9cce
Author: Mike McLean <mikem@redhat.com>
Date: Wed May 15 15:48:55 2024 -0400
api for querying repo queue
commit 2367eb21e60865c8e5a2e19f2f840938dbbbc58b
Author: Mike McLean <mikem@redhat.com>
Date: Wed May 15 15:24:44 2024 -0400
flake8
commit 811378d703a68b63c577468b85f4a49a9be2c441
Author: Mike McLean <mikem@redhat.com>
Date: Tue May 14 16:20:59 2024 -0400
record custom opts in repo.json
commit d448b6b3417e95bff2bae3b5a3790877ac834816
Author: Mike McLean <mikem@redhat.com>
Date: Mon May 13 15:32:33 2024 -0400
drop unused RawClauses code
will revisit in a later PR
commit 0422220e05ee3d43e5431a0d741f3632f42a8434
Author: Mike McLean <mikem@redhat.com>
Date: Sat May 11 13:34:12 2024 -0400
clean up BulkUpdateProcessor and add tests
commit 6721f847e655a3794d4f2fce383070cb6ad2d2d1
Author: Mike McLean <mikem@redhat.com>
Date: Fri May 10 17:43:17 2024 -0400
fix unit test after rebase
commit 833286eead2b278a99fe9ef80c13df88ca3af48c
Author: Mike McLean <mikem@redhat.com>
Date: Fri Apr 5 00:23:15 2024 -0400
adjust valid_repo opts checks
commit 7f418d550d8636072292ee05f6e9748b622c2d89
Author: Mike McLean <mikem@redhat.com>
Date: Fri Apr 5 00:03:33 2024 -0400
extend valid_repo unit test and fix a bug
commit eb844ba15894cb7fc2a739908e7d83c80fd82524
Author: Mike McLean <mikem@redhat.com>
Date: Thu Apr 4 15:41:08 2024 -0400
test_request_existing_req_invalid
commit 2e290453abf9ac31f51a1853aa123a2a34ad9605
Author: Mike McLean <mikem@redhat.com>
Date: Thu Apr 4 15:22:06 2024 -0400
test_request_at_event
commit 2c3389c24f5cabfbbaeb70512a4ba917cf5bd09b
Author: Mike McLean <mikem@redhat.com>
Date: Thu Apr 4 11:14:37 2024 -0400
test_request_new_req
commit 2cdeab9b5f5b0bff4c4806ae802e5f5e571bb25e
Author: Mike McLean <mikem@redhat.com>
Date: Thu Apr 4 10:56:36 2024 -0400
test_request_existing_req
commit 63c9ddab5f3e50b3537a82f390e9da5a66275a25
Author: Mike McLean <mikem@redhat.com>
Date: Thu Apr 4 10:45:22 2024 -0400
test_request_existing_repo
commit 03b5ba5c57ce1ade0cf7990d23ec599c8cb19482
Author: Mike McLean <mikem@redhat.com>
Date: Thu Apr 4 10:04:36 2024 -0400
more stubs
commit 92d16847f2cc2db0d8ee5afcf2d812b9bb6467ec
Author: Mike McLean <mikem@redhat.com>
Date: Wed Apr 3 22:44:00 2024 -0400
fix import
commit 1f621685532564a1c1ac373e98bec57c59107e6c
Author: Mike McLean <mikem@redhat.com>
Date: Wed Apr 3 22:16:25 2024 -0400
stub test
commit 45eef344e701c910f172d5642676d8f70d44049a
Author: Mike McLean <mikem@redhat.com>
Date: Wed Apr 3 22:01:31 2024 -0400
link repo doc in toc
commit bfffe233051c71785c335a82f64bf2abaae50078
Author: Mike McLean <mikem@redhat.com>
Date: Wed Apr 3 21:57:35 2024 -0400
unused options
commit 19f5a55faecf8229d60d21fd3e334e9a7f813384
Author: Mike McLean <mikem@redhat.com>
Date: Wed Apr 3 16:37:50 2024 -0400
include new setting
commit b7f81bd18016f862d1246ab6c81172fcd9c8b0ed
Author: Mike McLean <mikem@redhat.com>
Date: Wed Apr 3 08:21:16 2024 -0400
test + fixes
commit 16564cfb8e2725b395c624139ce3d878a6dd9d53
Author: Mike McLean <mikem@redhat.com>
Date: Wed Apr 3 07:44:15 2024 -0400
more kojira unit tests
commit 6b55c51302331ea09a126b9f3efbc71da164c0fb
Author: Mike McLean <mikem@redhat.com>
Date: Wed Apr 3 07:06:20 2024 -0400
fix unit test
commit 0b000c124b17f965c5606d30da792ba47db542cf
Author: Mike McLean <mikem@redhat.com>
Date: Tue Apr 2 22:07:08 2024 -0400
refactor repo delete
commit 0a03623fb018c80c8d38896fc99686cac56307fa
Author: Mike McLean <mikem@redhat.com>
Date: Tue Apr 2 19:13:15 2024 -0400
avoid circular import issue
commit 137d699b7653977f63f30041d9f5f1a88ae08d43
Author: Mike McLean <mikem@redhat.com>
Date: Tue Apr 2 19:03:18 2024 -0400
some kojira cleanup
commit 252e69d6dd17bb407b88b79efbb243ca5e441765
Author: Mike McLean <mikem@redhat.com>
Date: Tue Apr 2 17:21:14 2024 -0400
adjust state transition check
commit 336018081709fd44e7f12933b1ea59e02bff4aed
Author: Mike McLean <mikem@redhat.com>
Date: Tue Apr 2 16:05:45 2024 -0400
update RepoQuery
commit 68bb44848d9024c5520d8e7e2cc262adaa083cd1
Author: Mike McLean <mikem@redhat.com>
Date: Tue Mar 12 11:46:59 2024 -0400
decode query bytes in log
commit 818431fb9b09db162e73f7cb1adcddc8b151c821
Author: Mike McLean <mikem@redhat.com>
Date: Fri Mar 29 14:47:16 2024 -0400
sanity check requests before reusing
commit 63fee0ba1ea9d41d504bb09aeaea064246c16ff9
Author: Mike McLean <mikem@redhat.com>
Date: Fri Mar 29 11:41:13 2024 -0400
repo.query api call
commit bcf9a3cf64167612e3cd355aae7c41dd348cb8db
Author: Mike McLean <mikem@redhat.com>
Date: Fri Mar 29 10:31:58 2024 -0400
reduce some cli code duplication
commit 3e870cfd088c69c4aaaa9a0f938bcce740b3f42c
Author: Mike McLean <mikem@redhat.com>
Date: Thu Mar 28 18:27:18 2024 -0400
tweak warnings in external repo check
commit 0dfda64b806f2377d9c591105c83a4f05851b17a
Author: Mike McLean <mikem@redhat.com>
Date: Thu Mar 28 14:43:50 2024 -0400
clean repo queue
commit e5d328faa00c74e087f0b0d20aea7cd79ffb5ee4
Author: Mike McLean <mikem@redhat.com>
Date: Thu Mar 28 14:05:12 2024 -0400
implement retry limit for repo queue
commit 2185f3c9e32747c9657f2b9eb9ce6e3ca6d06ff8
Author: Mike McLean <mikem@redhat.com>
Date: Wed Mar 27 22:40:13 2024 -0400
cleanup a few TODOs
commit b45be8c44367bca9819561a0e928999b9a9e2428
Author: Mike McLean <mikem@redhat.com>
Date: Wed Mar 27 22:22:17 2024 -0400
tweak test
commit 546b161e20d0b310462dda705ae688e25b385cf5
Author: Mike McLean <mikem@redhat.com>
Date: Wed Mar 27 13:43:06 2024 -0400
more kojira tests
commit f887fdd12e59e36be561c1a89687a523e112b9d4
Author: Mike McLean <mikem@redhat.com>
Date: Tue Mar 26 20:16:11 2024 -0400
unit tests for RepoWatcher
commit e78b41431f3b45ae9e09d9a246982df9bb2c2374
Author: Mike McLean <mikem@redhat.com>
Date: Tue Mar 26 10:53:14 2024 -0400
fix unit tests
commit 64328ecb27e5598ec8977617e67d6dd630bc8db7
Author: Mike McLean <mikem@redhat.com>
Date: Mon Mar 25 14:03:19 2024 -0400
custom opts sorted out?
commit e3cee8c48bcf585a1a14aa8e56e43aaba2ccd63b
Author: Mike McLean <mikem@redhat.com>
Date: Mon Mar 25 12:50:34 2024 -0400
allow containment operator
commit bef7bbc3b2a16a6643bedb47be044c202a2bad2d
Author: Mike McLean <mikem@redhat.com>
Date: Mon Mar 25 11:59:15 2024 -0400
partial
commit 01788dfe386a07960c5c7888350e3917b44a0bab
Author: Mike McLean <mikem@redhat.com>
Date: Sat Mar 23 13:47:22 2024 -0400
fragment: struggling with repo opt timing
commit 44504bfbde4cf981391ea02127a05c4f0c2fc4a3
Author: Mike McLean <mikem@redhat.com>
Date: Fri Mar 22 17:14:57 2024 -0400
fine to have default values in the class
commit 1bfa520dd599acccd45f221f71c64fbefc3b5554
Author: Mike McLean <mikem@redhat.com>
Date: Fri Mar 22 17:14:18 2024 -0400
option renamed
commit a5db9d015a25f71fdb5e2dadcae55a8c5b7ec956
Author: Mike McLean <mikem@redhat.com>
Date: Fri Mar 22 17:04:32 2024 -0400
flake8
commit c02244f8018b651f309f39eb60f926209454dea2
Author: Mike McLean <mikem@redhat.com>
Date: Fri Mar 22 16:59:15 2024 -0400
more config options in repos.py
commit 9bf3edc0cf2c85a23964b79c4489bc9592656f16
Author: Mike McLean <mikem@redhat.com>
Date: Fri Mar 22 15:39:52 2024 -0400
use requests by default in regen-repo
commit 78c6e8a4459856fa333763b1977633307fd81cc3
Author: Mike McLean <mikem@redhat.com>
Date: Fri Mar 22 13:49:00 2024 -0400
adjust watch_fields
commit eadb2a24b9e0f324ac053c4bdede0865d4ed5bfa
Author: Mike McLean <mikem@redhat.com>
Date: Fri Mar 22 12:27:23 2024 -0400
adjust event validation
commit 3140e73cfccdcc25765c6f330073c991a44cbd9a
Author: Mike McLean <mikem@redhat.com>
Date: Fri Mar 22 12:01:24 2024 -0400
wait-repo tweaks
commit d1a8174cdd917bbf74882c51f1a7eaf4f02e542a
Author: Mike McLean <mikem@redhat.com>
Date: Fri Mar 22 10:35:28 2024 -0400
cli: wait-repo-request command
commit b2d08ac09880a1931b7f40b68d5ca765cd49a3a6
Author: Mike McLean <mikem@redhat.com>
Date: Fri Mar 22 10:04:46 2024 -0400
drop complex request options from wait-repo
commit b4ab55f241a693c0c0d08e386f998394a295fc7c
Author: Mike McLean <mikem@redhat.com>
Date: Fri Mar 22 09:36:37 2024 -0400
fix call
commit c04417439c4684342ac0d4423b341d363bc80e92
Author: Mike McLean <mikem@redhat.com>
Date: Fri Mar 22 09:32:48 2024 -0400
typo
commit 29be83b1523d45eb77cfe4959c9d6bc5c940ebbe
Author: Mike McLean <mikem@redhat.com>
Date: Wed Mar 20 07:28:12 2024 -0400
partial...
commit cd0ba3b6c2c47fe5bac4cf823b886462e092e2b3
Author: Mike McLean <mikem@redhat.com>
Date: Tue Mar 19 23:13:47 2024 -0400
drop event="new" code
commit 7f4f2356eceec03228e4a92b13e5593f956c390d
Author: Mike McLean <mikem@redhat.com>
Date: Mon Mar 18 21:00:25 2024 -0400
kojira on demand work
squashed because the branch was getting unwieldy
mostly working at this point, but there is a bit of outstanding work
commit e127878460a932cc77c399f69c40f0993c765dc7
Author: Mike McLean <mikem@redhat.com>
Date: Mon Mar 18 11:20:33 2024 -0400
stale comment
commit d0849d50b865f4f3783ddde5e1e6cf10db56ed39
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 17 23:58:13 2024 -0400
don't expire at_event repos
commit 8866db0e25b072aa12cc2827c62093b000fa7897
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 17 23:43:24 2024 -0400
typo
commit e2a5fd639b88c7b88708e782f0b7398296d2f805
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 17 23:40:08 2024 -0400
repos.py: support at_event
commit 6518f1656976ea2beb2cf732c82db0f159b09d15
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 17 22:20:35 2024 -0400
update repo symlink logic
commit 50d5e179f56393dd52c7225fc6f053d0095e9599
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 17 22:20:01 2024 -0400
...
commit 429fc85b391e0b5e637e20859f1094a37a5eab39
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 17 21:18:44 2024 -0400
block owner opt in makeTask and host.subtask
commit 40fcfe667ef70987444756f6d5554919d89fb1de
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 17 20:49:37 2024 -0400
db lock for repo queue
commit dfd94fac8fb96328b12bcf2f8f6f7e2d52deea85
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 17 17:47:39 2024 -0400
...
commit ecd9611e5d84d8a98920c40805616a6376ca652e
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 17 17:45:38 2024 -0400
move new exports around
commit a2e086df07f7b03dc4505a61f9b213e6e2ff20a5
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 17 16:46:29 2024 -0400
drop noisy debug line
commit 497bd773baa274d205df3bba317ee80617cc56a0
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 17 16:20:56 2024 -0400
...
commit 457c986894de754a927bc4880687e0f47c29cbdd
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 17 16:19:12 2024 -0400
...
commit 3aa0fa4862b37b7d178b1b7bb9a521ea01e7dded
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 17 16:18:30 2024 -0400
...
commit 391c2009671dea1270cce01666d04ad2ade0c323
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 17 16:15:32 2024 -0400
...
commit f3794e2acc8eef38e0c65fb27d3b2b3a58f53311
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 17 16:12:53 2024 -0400
...
commit aea5e1a91f9246cce5f162bbea3d4846e87b9811
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 17 16:11:53 2024 -0400
...
commit dc68ed8f0a43c9418c0c813f05a761bc8303c2b0
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 17 16:10:34 2024 -0400
typo
commit 73c72c8ed08744a188e4ae977b7ba2d92c75401b
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 17 16:08:15 2024 -0400
pruning tweaks
commit d3a10f8d5ef77a86db0e64a845f360d9f2cc2e17
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 17 15:50:57 2024 -0400
kojira: use ordered dict for delete queue
commit f6d7d44bac22840ee3ae1a93375c3b5ad430869c
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 17 14:59:05 2024 -0400
rework repo expiration and lifetimes a bit
commit 8bb91611c05ccb5d91910718a07494c08665ec22
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 17 00:27:34 2024 -0400
more kojira rework
commit 368d25a31d61eae8712591183bd2db1ff78f59d1
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 17 00:27:17 2024 -0400
cleanup
commit 292a1e4fdcc4098137156a42072e5bfda2f711df
Author: Mike McLean <mikem@redhat.com>
Date: Sat Mar 16 23:51:45 2024 -0400
track update time for repos
commit 01a7469ef7bcd952f45d732e4bb3b5f4bab2338a
Author: Mike McLean <mikem@redhat.com>
Date: Sat Mar 16 17:42:42 2024 -0400
factor in implicit joins for fields="*"
commit f9aba4557108b2005cf518e4bf316befa7f29911
Author: Mike McLean <mikem@redhat.com>
Date: Sat Mar 16 15:25:34 2024 -0400
partial repo docs
commit 74eae7104849237a4049a78c94b05187a2219f74
Author: Mike McLean <mikem@redhat.com>
Date: Sat Mar 16 13:17:36 2024 -0400
remove some obsolete code from kojira
commit d883807967a0d6d67a6e262a119ff5e03b8a947e
Author: Mike McLean <mikem@redhat.com>
Date: Sat Mar 16 11:42:48 2024 -0400
...
commit 3bc3aa98913463aa209bba1cecc71fc30f6ef42f
Author: Mike McLean <mikem@redhat.com>
Date: Sat Mar 16 11:12:50 2024 -0400
do_auto_repos
commit da69f05555f05ded973b4ade064ed7e5f7e70acd
Author: Mike McLean <mikem@redhat.com>
Date: Fri Feb 23 14:56:30 2024 -0500
fakehub: option to override config
commit 13a4ffdf9cd915b6af7b85120d87d50b8f6db5ed
Author: Mike McLean <mikem@redhat.com>
Date: Fri Mar 15 22:35:50 2024 -0400
tweak logging
commit 01af487cced25c0edaa9e98e5dc7bb7dc9c4d6bd
Author: Mike McLean <mikem@redhat.com>
Date: Fri Mar 15 22:16:21 2024 -0400
adjust archlist for external repo check
commit eb1c66f57a508f65dcac0e32cfaa3e178ed40bad
Author: Mike McLean <mikem@redhat.com>
Date: Fri Mar 15 18:45:53 2024 -0400
tweak logging; wait-repo --new
commit 3dab52d497926a6be80a3c98cc29f0cb6478926f
Author: Mike McLean <mikem@redhat.com>
Date: Fri Mar 15 15:03:23 2024 -0400
typo
commit 503365a79998aa2ee0eb2bd9b412747cdec50ab1
Author: Mike McLean <mikem@redhat.com>
Date: Thu Mar 14 00:17:24 2024 -0400
...
commit 46ec62e96334690344de18d535f7b9c4fd87d877
Author: Mike McLean <mikem@redhat.com>
Date: Thu Mar 14 00:16:09 2024 -0400
separate get/set for erepo data
commit 25c2861509cfebcfc38be5fff6c0b382dfcca224
Author: Mike McLean <mikem@redhat.com>
Date: Wed Mar 13 09:08:45 2024 -0400
only update erepo data in db if it changed
commit bc5db7494a486ae39b99dba4875547a8e8bc1ee0
Author: Mike McLean <mikem@redhat.com>
Date: Wed Mar 13 09:03:03 2024 -0400
...
commit 55b947fe2889dcb3b6112e9e80de926ef0ab70fa
Author: Mike McLean <mikem@redhat.com>
Date: Wed Mar 13 08:48:45 2024 -0400
partial work
commit 7e91985a378754ae2ba88e0e2182bdf6302416ef
Author: Mike McLean <mikem@redhat.com>
Date: Wed Mar 13 08:22:23 2024 -0400
handle external_repo_data history in cli
commit 0aeae31215af98ea8580307750389873f1e2521e
Author: Mike McLean <mikem@redhat.com>
Date: Wed Mar 13 08:15:50 2024 -0400
set_external_repo_data
commit d85e93c0c294770d2384a41a3f2c09b4a64ae3c4
Author: Mike McLean <mikem@redhat.com>
Date: Wed Mar 13 07:58:18 2024 -0400
support external_repo_data in query_history
commit 88fcf7ac5b8893bd045af017df1eb22a3cce8cb0
Merge: 8449ebfeb eba8de247
Author: Mike McLean <mikem@redhat.com>
Date: Tue Mar 12 00:01:57 2024 -0400
Merge remote-tracking branch 'origin' into kojira-on-demand
commit 8449ebfeb7976f5a5bfea78322c536cf0db6aa54
Author: Mike McLean <mikem@redhat.com>
Date: Mon Mar 11 23:56:25 2024 -0400
drop stray file
commit 3d3716454b9f12c1807f8992ecd01cde3d9aade9
Author: Mike McLean <mikem@redhat.com>
Date: Mon Mar 11 23:49:20 2024 -0400
flake8
commit f9014b6b689e5a1baf355842cf13905b8c50c3d8
Author: Mike McLean <mikem@redhat.com>
Date: Mon Mar 11 23:44:32 2024 -0400
handle deleted tags sanely in tag_last_change_event
commit 7d584e99a1a580039d18210c2cc857eb3419394f
Author: Mike McLean <mikem@redhat.com>
Date: Mon Mar 11 14:50:07 2024 -0400
typo
commit 6ac5921ce55ed356ba8c66466ebf56bb424591a9
Author: Mike McLean <mikem@redhat.com>
Date: Mon Mar 11 14:49:35 2024 -0400
add external_repo_data table. check ext repo tables for first/last tag change events
commit e107400463679113971daaa400d75ec006f4dca5
Author: Mike McLean <mikem@redhat.com>
Date: Mon Mar 11 12:14:21 2024 -0400
fix newer_than logic in WaitrepoTask
commit 4a1175a35e6ad7c59b3622a6028e2cd68e29bb79
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 10 23:47:29 2024 -0400
todos
commit c13d9e99d19bc40e59fd136b540b6a8c6e12a50f
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 10 23:30:59 2024 -0400
AllowNewRepo hub config
commit e3176cda238d3357fed0b905b03dfc0319dab12e
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 10 23:00:45 2024 -0400
fixes
commit d486960a441fbb517492a61ef2529370035a765a
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 10 22:48:00 2024 -0400
request min_event never null or in future
commit 4cc0d38b8e4bf1254bb156d085614f83929e1161
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 10 22:32:45 2024 -0400
...
commit bb0dc41cd6be4c42d4cd033e07210f1184c2c385
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 10 22:23:52 2024 -0400
default min_event. don't allow future events
commit 1dccf0a56b1e3f83107760111264249527abeb68
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 10 17:27:11 2024 -0400
use BulkUpdateProcessor in update_end_events
commit 03c791edd3bb49359f2a01eaf53cbb717c53833e
Author: Mike McLean <mikem@redhat.com>
Date: Sun Mar 10 17:26:26 2024 -0400
BulkUpdateProcessor
commit 4bd2a0da1c998ce14fd856e68318551747867e06
Author: Mike McLean <mikem@redhat.com>
Date: Fri Mar 8 14:53:53 2024 -0500
update_end_events()
commit b45b13bcba141ea6b30618fb76c1a94593dfe569
Author: Mike McLean <mikem@redhat.com>
Date: Fri Mar 8 13:03:33 2024 -0500
record begin/end events in repo_init
commit 6f1adf51d9e24f80369df8b96010c0d6d123b448
Author: Mike McLean <mikem@redhat.com>
Date: Fri Mar 8 12:33:40 2024 -0500
QueryView: accept single field value
commit 6b292d9a4b1bda56ff8091fbcb126749f952d045
Author: Mike McLean <mikem@redhat.com>
Date: Fri Mar 8 12:28:02 2024 -0500
adjust query fields
commit e9e8e74703de8b6c531944c05d54447f0d7cb13f
Author: Mike McLean <mikem@redhat.com>
Date: Fri Mar 8 12:18:12 2024 -0500
QueryView: adjust special field name handling
commit 97d910d70634183a3d5ae804176a5c8691882b7a
Author: Mike McLean <mikem@redhat.com>
Date: Fri Mar 8 11:45:54 2024 -0500
adjust event fields
commit c70d34805227a61ab96176537dae64db3883e58f
Author: Mike McLean <mikem@redhat.com>
Date: Thu Mar 7 23:37:29 2024 -0500
honor owner opt to make_task
commit 40601d220179eb9718023002f8811ce5cbd09860
Author: Mike McLean <mikem@redhat.com>
Date: Thu Mar 7 23:29:50 2024 -0500
...
commit 6f84ca3aa8c24d4618294027dce7a23620a3e2d7
Author: Mike McLean <mikem@redhat.com>
Date: Thu Mar 7 23:24:22 2024 -0500
typo
commit c423b8a4cc5fd4ed5c762e7b5adc06449c72ea70
Author: Mike McLean <mikem@redhat.com>
Date: Thu Mar 7 23:22:18 2024 -0500
use kojira user for repo tasks
commit 63dacff462ce064bbdf0b5c6e8ef14b2abe08e0c
Author: Mike McLean <mikem@redhat.com>
Date: Thu Mar 7 23:05:12 2024 -0500
hook to fulfill requests when repos are marked ready
commit aa79055c1e404a4c4fa9ac894fe978c8f9827f72
Author: Mike McLean <mikem@redhat.com>
Date: Thu Mar 7 01:08:19 2024 -0500
no more data field
commit 7dd029fb94e24004793e2d1232b3225b3cee5c97
Author: Mike McLean <mikem@redhat.com>
Date: Thu Mar 7 01:01:41 2024 -0500
use full opts in request entries too
commit 73dc2f232b231467d12355af0ace14284f5422a8
Author: Mike McLean <mikem@redhat.com>
Date: Thu Mar 7 00:54:41 2024 -0500
...
commit 414d0a55cf66d93b6fb79e9677f68fd141edc655
Author: Mike McLean <mikem@redhat.com>
Date: Thu Mar 7 00:54:01 2024 -0500
propagate opts in repo_init
commit 99c1dde4771164d215f8c9a9acc0dadb678d047b
Author: Mike McLean <mikem@redhat.com>
Date: Thu Mar 7 00:20:57 2024 -0500
include opts in query
commit 08289b3444612920856e6a949a379f61cb46b5e7
Author: Mike McLean <mikem@redhat.com>
Date: Thu Mar 7 00:15:12 2024 -0500
missing import
commit bc3ca72c084b8e8de678ecbdcf6bbcfe972363e1
Author: Mike McLean <mikem@redhat.com>
Date: Thu Mar 7 00:10:45 2024 -0500
more opts support
commit f7c12cfe5f5b6c6c7895cd5eb4cdeb45757022a1
Author: Mike McLean <mikem@redhat.com>
Date: Wed Mar 6 23:59:08 2024 -0500
handle repo opts in request call
commit 02a75f3996d59ae36f046327fca766e8799ef35b
Author: Mike McLean <mikem@redhat.com>
Date: Wed Mar 6 22:01:06 2024 -0500
fix import
commit 7fe52dc83a80c0f68580d274bd2e60c57ab2e26d
Author: Mike McLean <mikem@redhat.com>
Date: Wed Mar 6 21:58:59 2024 -0500
fix fields
commit f016c3a46d901ca762f5e8824fcd5efad2eede57
Author: Mike McLean <mikem@redhat.com>
Date: Wed Mar 6 21:47:40 2024 -0500
move code into kojihub/repos
commit 9953009d3cc6f08cd16cbaa593ae79796ac86fa2
Author: Mike McLean <mikem@redhat.com>
Date: Wed Mar 6 21:15:17 2024 -0500
more unit test fixes
commit f5decfaff3f56601262752e8a06b6f97bc4cfb33
Author: Mike McLean <mikem@redhat.com>
Date: Wed Mar 6 20:51:07 2024 -0500
unit test
commit b51d4979824abe6ddc402011d21394854f46687e
Author: Mike McLean <mikem@redhat.com>
Date: Wed Mar 6 20:19:06 2024 -0500
flake8
commit aeee5b59df4e9da93db83874f022419c24b37162
Author: Mike McLean <mikem@redhat.com>
Date: Tue Feb 20 18:05:25 2024 -0500
stub: tracking opts
commit b5c150b52f575c681bdacb4c87e690653edc465a
Author: Mike McLean <mikem@redhat.com>
Date: Mon Feb 19 15:11:40 2024 -0500
different approach for raw clauses
commit a9001c97935f3ad90571589688b1f291242bad08
Author: Mike McLean <mikem@redhat.com>
Date: Mon Feb 19 14:32:57 2024 -0500
and any necessary values and joins
commit 84a46633b7dc1303e48367b614b99de3730a865d
Author: Mike McLean <mikem@redhat.com>
Date: Mon Feb 19 14:17:12 2024 -0500
give hub code a way to add raw clauses with QueryView
commit 5d43c18f56563fc14f12d12c57f044125a5b33f9
Author: Mike McLean <mikem@redhat.com>
Date: Mon Feb 19 14:09:27 2024 -0500
private vars
commit 91992f2e7b0a6cdd5e7cf8b99f6c37cfb20b08a6
Author: Mike McLean <mikem@redhat.com>
Date: Mon Feb 19 14:02:07 2024 -0500
saner data from get_fields
commit 1e581cd5a5f3a6e257c3147a8ea763987984403c
Author: Mike McLean <mikem@redhat.com>
Date: Mon Feb 19 13:26:34 2024 -0500
update test and include tag_first_change_event()
commit 3509300b0b1c6bb516b5552f2b1d37008231efae
Author: Mike McLean <mikem@redhat.com>
Date: Mon Feb 19 12:42:53 2024 -0500
revert global verbose option
commit 4173e8610b0beed3dcea14849da1f115eb43c293
Author: Mike McLean <mikem@redhat.com>
Date: Mon Feb 19 07:59:48 2024 -0500
better ordering support in QueryView
commit 359543b95cd524d5f4d8d82854680452ee07fd00
Author: Mike McLean <mikem@redhat.com>
Date: Sun Feb 18 01:19:30 2024 -0500
also include test from multirepo
commit 1ceb8c01f92cfe5029c78688b14f643e1fa8be12
Author: Mike McLean <mikem@redhat.com>
Date: Sun Feb 18 00:18:39 2024 -0500
constraint
commit 064bfc18b3a07edd602192bc4f48ac52adeedc3f
Author: Mike McLean <mikem@redhat.com>
Date: Sun Feb 18 00:00:15 2024 -0500
tagFirstChangeEvent, plus fix
commit 0efbfed21ec3b66841a7e4996e59bc8aaeed352b
Author: Mike McLean <mikem@redhat.com>
Date: Sat Feb 17 22:37:08 2024 -0500
fix
commit 3ead49b9ed7f643e7ba2db2077993eb515f10e38
Author: Mike McLean <mikem@redhat.com>
Date: Sat Feb 17 21:54:05 2024 -0500
cleanup
commit be2beb37fd35b46a5b4d60f39c8040640dfc7800
Author: Mike McLean <mikem@redhat.com>
Date: Sat Feb 17 21:20:29 2024 -0500
rename request field, clean up Watcher args
commit d392a974a1cbba119abc6a9e99e54d45a0cf0d62
Author: Mike McLean <mikem@redhat.com>
Date: Sat Feb 17 18:38:21 2024 -0500
...
commit 70ee37dbafc6c4e77a62aac44f11747c0f6bfc25
Author: Mike McLean <mikem@redhat.com>
Date: Sat Feb 17 18:37:08 2024 -0500
use tagLastChangeEvent for min_event=last
commit 82d0d77679afc163bb5c36e43f834c109d7e6371
Author: Mike McLean <mikem@redhat.com>
Date: Sat Feb 17 18:33:04 2024 -0500
tag_last_change_event: support inheritance
commit c3c87f8ccf4feea321d9bfa54cc1f223431a8d13
Author: Mike McLean <mikem@redhat.com>
Date: Sat Feb 17 17:55:10 2024 -0500
waitrepo anon mode (no request)
commit c6994353d8daa4cb615eae4dde0368b97ea33d18
Author: Mike McLean <mikem@redhat.com>
Date: Sat Feb 17 09:32:39 2024 -0500
don't reuse a request for a future event
commit 22abfadc57adcf11229336eede6459585a293da6
Author: Mike McLean <mikem@redhat.com>
Date: Sat Feb 17 09:16:47 2024 -0500
...
commit c7b899c4a62d667d96e8320b6fa96106972f5859
Author: Mike McLean <mikem@redhat.com>
Date: Sat Feb 17 09:10:22 2024 -0500
...
commit a185fd86766c283fd9c18a4d95546a8e36fd21c9
Author: Mike McLean <mikem@redhat.com>
Date: Sat Feb 17 09:08:31 2024 -0500
...
commit 87401bddac38ebb658f2e9e4fbe36af2e6010e42
Author: Mike McLean <mikem@redhat.com>
Date: Sat Feb 17 09:06:48 2024 -0500
...
commit bb72bd0e2d78f2d21168144a976e772473efeb16
Author: Mike McLean <mikem@redhat.com>
Date: Sat Feb 17 08:59:44 2024 -0500
...
commit 4dbeb0edfa55cf39f4c897b3c15345e2daf9dad6
Author: Mike McLean <mikem@redhat.com>
Date: Sat Feb 17 08:59:10 2024 -0500
...
commit 994e13d538d580ea9f7499310b8a0e4cd841af07
Author: Mike McLean <mikem@redhat.com>
Date: Sat Feb 17 08:57:22 2024 -0500
...
commit 1fee9331e72e4d48eccfd640183563a909181af6
Author: Mike McLean <mikem@redhat.com>
Date: Sat Feb 17 08:53:06 2024 -0500
...
commit e74eea41048a5ec6f4a9c52025c2e452f640a808
Author: Mike McLean <mikem@redhat.com>
Date: Sat Feb 17 00:57:11 2024 -0500
...
commit ec1a581ba23b292ab840b740dabd1f3e4854fe33
Author: Mike McLean <mikem@redhat.com>
Date: Sat Feb 17 00:48:48 2024 -0500
attempting to wire this up into newRepo and waitrepo task
commit 7eee457230a2b0e6aa9b974e94e4ca516227a196
Author: Mike McLean <mikem@redhat.com>
Date: Fri Feb 16 18:58:18 2024 -0500
...
commit 1c719d642da5f5c2ca0b7ce9af170054767423c6
Author: Mike McLean <mikem@redhat.com>
Date: Fri Feb 16 18:56:11 2024 -0500
adjust checkRepoRequest return
commit e6e5f15961c7801b1777743b799fbe2c96a08138
Author: Mike McLean <mikem@redhat.com>
Date: Fri Feb 16 18:00:27 2024 -0500
handle repo requests in scheduler loop
commit a0dde4e3625110671bcea7abbdab0f0c03142cbc
Author: Mike McLean <mikem@redhat.com>
Date: Fri Feb 16 11:06:00 2024 -0500
tweak repo report in taginfo cli
commit 2d860a17caf770507c67a89ac234d17c200c30ab
Author: Mike McLean <mikem@redhat.com>
Date: Fri Feb 16 10:46:13 2024 -0500
enable/clarify new repo fields
commit 7204ce3753450981300bf78102fc40f1b41786b4
Author: Mike McLean <mikem@redhat.com>
Date: Fri Feb 16 09:38:59 2024 -0500
syntax
commit 96236f4ef93e5babeb0800b5b4a16117a3e8c1df
Author: Mike McLean <mikem@redhat.com>
Date: Fri Feb 16 10:20:34 2024 -0500
pull tag_last_change_event and repo fields from multirepo branch
commit a707c19eda9bc6efc22ce004367cbee960fcccb6
Author: Mike McLean <mikem@redhat.com>
Date: Fri Feb 16 09:26:07 2024 -0500
partial: check_repo_queue
commit a208d128e60bdb4ad531938d55b2c793b65ab24b
Author: Mike McLean <mikem@redhat.com>
Date: Thu Feb 15 19:35:03 2024 -0500
...
commit e9a601059fb9ceb89ec9b84680afd6dc276424f9
Author: Mike McLean <mikem@redhat.com>
Date: Thu Feb 15 19:22:55 2024 -0500
...
commit 067e385861766d7a355d5671a1e1e73ebd737b97
Author: Mike McLean <mikem@redhat.com>
Date: Thu Feb 15 19:14:11 2024 -0500
use RepoView more
commit e5b4a58b65c6f195f724fb135acea6dd18abc3c2
Author: Mike McLean <mikem@redhat.com>
Date: Thu Feb 15 17:37:47 2024 -0500
executeOne
commit 45aecfeb0a32c097fc65574296958573e6405009
Author: Mike McLean <mikem@redhat.com>
Date: Thu Feb 15 17:29:06 2024 -0500
...
commit 41314dc10c3a1a13f39628de5caedc7486193c7b
Author: Mike McLean <mikem@redhat.com>
Date: Thu Feb 15 17:27:40 2024 -0500
only return one req
commit c44ed9e4e3bc349e4107df79847049503a2c75be
Author: Mike McLean <mikem@redhat.com>
Date: Thu Feb 15 14:57:11 2024 -0500
...
commit cfd60878ada8196616fd401fb6cbaf7aa2dcc98b
Author: Mike McLean <mikem@redhat.com>
Date: Thu Feb 15 11:10:31 2024 -0500
...
commit 11f65335ca9c6167b8f457460a58471c37ae4098
Author: Mike McLean <mikem@redhat.com>
Date: Thu Feb 15 09:12:34 2024 -0500
testing
commit c05f8f3b3f64c3aeef5ff0296dc181123c756952
Author: Mike McLean <mikem@redhat.com>
Date: Wed Feb 14 22:52:14 2024 -0500
flesh out stub
commit fd9c57c2c95bb5a1bd051d9d1e7e73e2f3fcb9b0
Author: Mike McLean <mikem@redhat.com>
Date: Wed Feb 14 22:26:19 2024 -0500
...
commit d59f38a5adc90607556a1671c85b808209389edd
Author: Mike McLean <mikem@redhat.com>
Date: Tue Feb 6 22:19:36 2024 -0500
more fragments
commit 2d1b45c66e1cc3f41f6812b7b6d4bd66c4acf419
Author: Mike McLean <mikem@redhat.com>
Date: Tue Feb 6 20:38:04 2024 -0500
XXX DEBUG CODE
commit d8e3a4bd205acb5ec1940fa30e29701f0a358d51
Author: Mike McLean <mikem@redhat.com>
Date: Tue Feb 6 20:37:52 2024 -0500
...
commit 0744a29bd303bf9b381aa48e3e5dd98e8b7373ef
Author: Mike McLean <mikem@redhat.com>
Date: Tue Feb 6 20:37:40 2024 -0500
...
commit 0726f8d22b227e002f7ddd927829a1e3ec66681f
Author: Mike McLean <mikem@redhat.com>
Date: Tue Feb 6 20:27:22 2024 -0500
RepoWatcher stub
commit a74a74ef9688b1d27b528dd8e2de8ff3b63f97ae
Author: Mike McLean <mikem@redhat.com>
Date: Tue Feb 6 00:05:49 2024 -0500
...
commit d68c2902015a4998f59355aa224924e5ace21b0a
Author: Mike McLean <mikem@redhat.com>
Date: Mon Feb 5 08:18:56 2024 -0500
...
commit ff8538344e1bf24d7b94ad45f26fb1548be4782d
Author: Mike McLean <mikem@redhat.com>
Date: Fri Feb 2 00:00:41 2024 -0500
partial
commit f618ed321108e0094ab95e054cb5d53fb2e0dfe1
Author: Mike McLean <mikem@redhat.com>
Date: Thu Feb 1 23:54:57 2024 -0500
tweak unit test
commit 208a2f441401cefd65a7a92d91b6b76bf5dd97d3
Author: Mike McLean <mikem@redhat.com>
Date: Thu Feb 1 22:52:37 2024 -0500
comments
commit 8fe5b4f0d773f190c037ab95520623a3d250c069
Author: Mike McLean <mikem@redhat.com>
Date: Thu Feb 1 01:43:28 2024 -0500
repo_queue stub
# python library
# db utilities for koji
# Copyright (c) 2005-2014 Red Hat, Inc.
#
# Koji is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation;
# version 2.1 of the License.
#
# This software is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this software; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
#
# Authors:
#       Mike McLean <mikem@redhat.com>

from __future__ import absolute_import

import logging
import os
import re
import sys
import time
import traceback

import psycopg2
# import psycopg2.extensions
# # don't convert timestamp fields to DateTime objects
# del psycopg2.extensions.string_types[1114]
# del psycopg2.extensions.string_types[1184]
# del psycopg2.extensions.string_types[1082]
# del psycopg2.extensions.string_types[1083]
# del psycopg2.extensions.string_types[1266]

import koji
import koji.context
import koji.util  # needed for koji.util.decode_bytes in CursorWrapper.quote

context = koji.context.context


POSITIONAL_RE = re.compile(r'%[a-z]')
NAMED_RE = re.compile(r'%\(([^\)]+)\)[a-z]')


## Globals ##
_DBopts = None
# A persistent connection to the database.
# A new connection will be created whenever
# Apache forks a new worker, and that connection
# will be used to service all requests handled
# by that worker.
# This probably doesn't need to be a ThreadLocal
# since Apache is not using threading,
# but play it safe anyway.
_DBconn = koji.context.ThreadLocal()

logger = logging.getLogger('koji.db')


class DBWrapper:
    def __init__(self, cnx):
        self.cnx = cnx

    def __getattr__(self, key):
        if not self.cnx:
            raise Exception('connection is closed')
        return getattr(self.cnx, key)

    def cursor(self, *args, **kw):
        if not self.cnx:
            raise Exception('connection is closed')
        return CursorWrapper(self.cnx.cursor(*args, **kw))

    def close(self):
        # Rollback any uncommitted changes and clear the connection so
        # this DBWrapper is no longer usable after close()
        if not self.cnx:
            raise Exception('connection is closed')
        self.cnx.cursor().execute('ROLLBACK')
        # We do this rather than cnx.rollback to avoid opening a new transaction
        # If our connection gets recycled cnx.rollback will be called then.
        self.cnx = None


class CursorWrapper:
    def __init__(self, cursor):
        self.cursor = cursor
        self.logger = logging.getLogger('koji.db')

    def __getattr__(self, key):
        return getattr(self.cursor, key)

    def _timed_call(self, method, args, kwargs):
        start = time.time()
        ret = getattr(self.cursor, method)(*args, **kwargs)
        self.logger.debug("%s operation completed in %.4f seconds", method, time.time() - start)
        return ret

    def fetchone(self, *args, **kwargs):
        return self._timed_call('fetchone', args, kwargs)

    def fetchall(self, *args, **kwargs):
        return self._timed_call('fetchall', args, kwargs)

    def quote(self, operation, parameters):
        if hasattr(self.cursor, "mogrify"):
            quote = self.cursor.mogrify
        else:
            def quote(a, b):
                return a % b
        try:
            sql = quote(operation, parameters)
            if isinstance(sql, bytes):
                try:
                    sql = koji.util.decode_bytes(sql)
                except Exception:
                    pass
            return sql
        except Exception:
            self.logger.exception(
                'Unable to quote query:\n%s\nParameters: %s', operation, parameters)
            return "INVALID QUERY"

    def preformat(self, sql, params):
        """psycopg2 requires all variable placeholders to use the string (%s) datatype,
        regardless of the actual type of the data. Format the sql string to be compliant.

        It also requires IN parameters to be in tuple rather than list format."""
        sql = POSITIONAL_RE.sub(r'%s', sql)
        sql = NAMED_RE.sub(r'%(\1)s', sql)
        if isinstance(params, dict):
            for name, value in params.items():
                if isinstance(value, list):
                    params[name] = tuple(value)
        else:
            if isinstance(params, tuple):
                params = list(params)
            for i, item in enumerate(params):
                if isinstance(item, list):
                    params[i] = tuple(item)
        return sql, params

    def execute(self, operation, parameters=(), log_errors=True):
        debug = self.logger.isEnabledFor(logging.DEBUG)
        operation, parameters = self.preformat(operation, parameters)
        if debug:
            self.logger.debug(self.quote(operation, parameters))
            start = time.time()
        try:
            ret = self.cursor.execute(operation, parameters)
        except Exception:
            if log_errors:
                self.logger.error('Query failed. Query was: %s', self.quote(operation, parameters))
            raise
        if debug:
            self.logger.debug("Execute operation completed in %.4f seconds", time.time() - start)
        return ret
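

# Usage sketch for preformat(): placeholders are normalized to psycopg2's
# string form and list parameters become tuples. Illustrative only; the
# query and values below are hypothetical.
#
#   cur = connect().cursor()
#   sql, params = cur.preformat(
#       "SELECT id FROM tag WHERE id=%(id)d AND arch IN %(arches)s",
#       {'id': 42, 'arches': ['x86_64', 'aarch64']})
#   # sql    -> "SELECT id FROM tag WHERE id=%(id)s AND arch IN %(arches)s"
#   # params -> {'id': 42, 'arches': ('x86_64', 'aarch64')}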


## Functions ##

def provideDBopts(**opts):
    global _DBopts
    if _DBopts is None:
        _DBopts = dict([i for i in opts.items() if i[1] is not None])


def setDBopts(**opts):
    global _DBopts
    _DBopts = opts


def getDBopts():
    return _DBopts


def connect():
    logger = logging.getLogger('koji.db')
    global _DBconn
    if hasattr(_DBconn, 'conn'):
        # Make sure the previous transaction has been
        # closed. This is safe to call multiple times.
        conn = _DBconn.conn
        try:
            # Under normal circumstances, the last use of this connection
            # will have issued a raw ROLLBACK to close the transaction. To
            # avoid 'no transaction in progress' warnings (depending on postgres
            # configuration) we open a new one here.
            # Should there somehow be a transaction in progress, a second
            # BEGIN will be a harmless no-op, though there may be a warning.
            conn.cursor().execute('BEGIN')
            conn.rollback()
            return DBWrapper(conn)
        except psycopg2.Error:
            del _DBconn.conn
    # create a fresh connection
    opts = _DBopts
    if opts is None:
        opts = {}
    try:
        if 'dsn' in opts:
            conn = psycopg2.connect(dsn=opts['dsn'])
        else:
            conn = psycopg2.connect(**opts)
        conn.set_client_encoding('UTF8')
    except Exception:
        logger.error(''.join(traceback.format_exception(*sys.exc_info())))
        raise
    # XXX test
    # return conn
    _DBconn.conn = conn

    return DBWrapper(conn)
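

# Usage sketch for the connection helpers. Illustrative only; the connection
# options are hypothetical psycopg2 keyword arguments.
#
#   setDBopts(database='koji', user='koji', host='db.example.com')
#   cnx = connect()        # DBWrapper around the per-process connection
#   cur = cnx.cursor()     # CursorWrapper adds logging and preformatting
#   cur.execute('SELECT 1')
#   cnx.close()            # issues a raw ROLLBACK and detaches the wrapper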


def _dml(operation, values, log_errors=True):
    """Run an insert, update, or delete. Return number of rows affected

    If log_errors is False, errors will not be logged. That makes sense only
    for queries which are expected to fail (e.g. LOCK ... NOWAIT)
    """
    c = context.cnx.cursor()
    c.execute(operation, values, log_errors=log_errors)
    ret = c.rowcount
    logger.debug("Operation affected %s row(s)", ret)
    c.close()
    context.commit_pending = True
    return ret


def _fetchMulti(query, values):
    """Run the query and return all rows"""
    c = context.cnx.cursor()
    c.execute(query, values)
    results = c.fetchall()
    c.close()
    return results


def _fetchSingle(query, values, strict=False):
    """Run the query and return a single row

    If strict is true, raise an error if the query returns more or less than
    one row."""
    results = _fetchMulti(query, values)
    numRows = len(results)
    if numRows == 0:
        if strict:
            raise koji.GenericError('query returned no rows')
        else:
            return None
    elif strict and numRows > 1:
        raise koji.GenericError('multiple rows returned for a single row query')
    else:
        return results[0]


def _singleValue(query, values=None, strict=True):
    """Perform a query that returns a single value.

    Note that unless strict is True a return value of None could mean either
    a single NULL value or zero rows returned."""
    if values is None:
        values = {}
    row = _fetchSingle(query, values, strict)
    if row:
        if strict and len(row) > 1:
            raise koji.GenericError('multiple fields returned for a single value query')
        return row[0]
    else:
        # don't need to check strict here, since that was already handled by _fetchSingle()
        return None


def _multiRow(query, values, fields):
    """Return all rows from "query".  Named query parameters
    can be specified using the "values" map.  Results will be returned
    as a list of maps.  Each map in the list will have a key for each
    element in the "fields" list.  If there are no results, an empty
    list will be returned."""
    return [dict(zip(fields, row)) for row in _fetchMulti(query, values)]


def _singleRow(query, values, fields, strict=False):
    """Return a single row from "query".  Named parameters can be
    specified using the "values" map.  The result will be returned as
    a map.  The map will have a key for each element in the "fields"
    list.  If more than one row is returned and "strict" is true, a
    GenericError will be raised.  If no rows are returned, and "strict"
    is True, a GenericError will be raised.  Otherwise None will be
    returned."""
    row = _fetchSingle(query, values, strict)
    if row:
        return dict(zip(fields, row))
    else:
        # strict enforced by _fetchSingle
        return None
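

# Usage sketch for the row helpers. Illustrative only; the table and columns
# are hypothetical.
#
#   fields = ('id', 'name')
#   row = _singleRow("SELECT id, name FROM users WHERE id=%(id)i",
#                    {'id': 1}, fields, strict=True)
#   # -> {'id': 1, 'name': 'kojira'}, or GenericError if not exactly one row
#   rows = _multiRow("SELECT id, name FROM users", {}, fields)
#   # -> a (possibly empty) list of such maps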


def get_event():
    """Get an event id for this transaction

    We cache the result in context, so subsequent calls in the same transaction will
    get the same event.

    This cache is cleared between the individual calls in a multicall.
    See: https://pagure.io/koji/pull-request/74
    """
    if hasattr(context, 'event_id'):
        return context.event_id
    event_id = _singleValue("SELECT get_event()")
    context.event_id = event_id
    return event_id


def nextval(sequence):
    """Get the next value for the given sequence"""
    data = {'sequence': sequence}
    return _singleValue("SELECT nextval(%(sequence)s)", data, strict=True)


def currval(sequence):
    """Get the current value for the given sequence"""
    data = {'sequence': sequence}
    return _singleValue("SELECT currval(%(sequence)s)", data, strict=True)


def db_lock(name, wait=True):
    """Obtain lock for name

    The named lock must exist in the locks table

    :param string name: the lock name
    :param bool wait: whether to wait for the lock (default: True)
    :return: True if locked, False otherwise

    This function is implemented using db row locks and the locks table
    """
    # attempt to lock the row
    data = {"name": name}
    if wait:
        query = "SELECT name FROM locks WHERE name=%(name)s FOR UPDATE"
    else:
        # using SKIP LOCKED rather than NOWAIT to avoid error messages
        query = "SELECT name FROM locks WHERE name=%(name)s FOR UPDATE SKIP LOCKED"
    rows = _fetchMulti(query, data)

    if rows:
        # we have the lock
        return True

    if not wait:
        # in the no-wait case, this could mean either that the row is already locked,
        # or that the lock does not exist, so we check
        query = "SELECT name FROM locks WHERE name=%(name)s"
        rows = _fetchMulti(query, data)
        if rows:
            # the lock exists, but we did not acquire it
            return False

    # otherwise, the lock does not exist
    raise koji.LockError(f"Lock not defined: {name}")
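

# Usage sketch for db_lock(). Illustrative only; the lock name is hypothetical
# and must already exist as a row in the locks table. The row lock is held
# until the surrounding transaction commits or rolls back.
#
#   if db_lock('repo-queue', wait=False):
#       handle_queue()     # hypothetical worker; we hold the lock
#   else:
#       pass               # another process already holds it; skip this pass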


class Savepoint(object):

    def __init__(self, name):
        self.name = name
        _dml("SAVEPOINT %s" % name, {})

    def rollback(self):
        _dml("ROLLBACK TO SAVEPOINT %s" % self.name, {})
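

# Usage sketch for Savepoint: recover from an expected failure without
# aborting the whole transaction. Illustrative only; risky_update() is
# hypothetical.
#
#   sp = Savepoint('before_update')
#   try:
#       risky_update()
#   except Exception:
#       sp.rollback()      # the transaction remains usable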


class InsertProcessor(object):
    """Build an insert statement

    table - the table to insert into
    data - a dictionary of data to insert (keys = row names)
    rawdata - data to insert specified as sql expressions rather than python values

    does not support query inserts of "DEFAULT VALUES"
    """

    def __init__(self, table, data=None, rawdata=None):
        self.table = table
        self.data = {}
        if data:
            self.data.update(data)
        self.rawdata = {}
        if rawdata:
            self.rawdata.update(rawdata)

    def __str__(self):
        if not self.data and not self.rawdata:
            return "-- incomplete insert: no data"
        parts = ['INSERT INTO %s ' % self.table]
        columns = sorted(list(self.data.keys()) + list(self.rawdata.keys()))
        parts.append("(%s) " % ', '.join(columns))
        values = []
        for key in columns:
            if key in self.data:
                values.append("%%(%s)s" % key)
            else:
                values.append("(%s)" % self.rawdata[key])
        parts.append("VALUES (%s)" % ', '.join(values))
        return ''.join(parts)

    def __repr__(self):
        return "<InsertProcessor: %r>" % vars(self)

    def set(self, **kwargs):
        """Set data via keyword args"""
        self.data.update(kwargs)

    def rawset(self, **kwargs):
        """Set rawdata via keyword args"""
        self.rawdata.update(kwargs)

    def make_create(self, event_id=None, user_id=None):
        if event_id is None:
            event_id = get_event()
        if user_id is None:
            context.session.assertLogin()
            user_id = context.session.user_id
        self.data['create_event'] = event_id
        self.data['creator_id'] = user_id

    def dup_check(self):
        """Check to see if the insert duplicates an existing row"""
        if self.rawdata:
            logger.warning("Can't perform duplicate check")
            return None
        data = self.data.copy()
        if 'create_event' in self.data:
            # versioned table
            data['active'] = True
            del data['create_event']
            del data['creator_id']
        clauses = ["%s = %%(%s)s" % (k, k) for k in data]
        query = QueryProcessor(columns=list(data.keys()), tables=[self.table],
                               clauses=clauses, values=data)
        if query.execute():
            return True
        return False

    def execute(self):
        return _dml(str(self), self.data)
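

# Usage sketch for InsertProcessor. Illustrative only; the table and columns
# are hypothetical.
#
#   insert = InsertProcessor('tag_config', data={'tag_id': 42, 'arches': 'x86_64'})
#   insert.make_create()   # stamps create_event and creator_id
#   if not insert.dup_check():
#       insert.execute()
#   # str(insert) -> "INSERT INTO tag_config (arches, create_event, creator_id,
#   #                 tag_id) VALUES (%(arches)s, %(create_event)s, ...)"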


class UpsertProcessor(InsertProcessor):
    """Build a basic upsert statement

    table - the table to insert into
    data - a dictionary of data to insert (keys = row names)
    rawdata - data to insert specified as sql expressions rather than python values
    keys - the columns that are the unique keys
    skip_dup - if set to true, do nothing on conflict
    """

    def __init__(self, table, data=None, rawdata=None, keys=None, skip_dup=False):
        super(UpsertProcessor, self).__init__(table, data=data, rawdata=rawdata)
        self.keys = keys
        self.skip_dup = skip_dup
        if not keys and not skip_dup:
            raise ValueError('either keys or skip_dup must be set')

    def __repr__(self):
        return "<UpsertProcessor: %r>" % vars(self)

    def __str__(self):
        insert = super(UpsertProcessor, self).__str__()
        parts = [insert]
        if self.skip_dup:
            parts.append(' ON CONFLICT DO NOTHING')
        else:
            parts.append(f' ON CONFLICT ({",".join(self.keys)}) DO UPDATE SET ')
            # filter out conflict keys from data
            data = {k: self.data[k] for k in self.data if k not in self.keys}
            rawdata = {k: self.rawdata[k] for k in self.rawdata if k not in self.keys}
            assigns = [f"{key} = %({key})s" for key in data]
            # iterate over the filtered rawdata, not self.rawdata, so that
            # conflict keys are excluded from the assignments
            assigns.extend([f"{key} = ({rawdata[key]})" for key in rawdata])
            parts.append(', '.join(sorted(assigns)))
        return ''.join(parts)
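

# Usage sketch for UpsertProcessor. Illustrative only; the table and columns
# are hypothetical.
#
#   upsert = UpsertProcessor('external_repo_data',
#                            data={'external_repo_id': 5, 'data': '{}'},
#                            keys=['external_repo_id'])
#   upsert.execute()
#   # -> INSERT ... ON CONFLICT (external_repo_id) DO UPDATE SET data = %(data)s
#   # with skip_dup=True the statement would end in ON CONFLICT DO NOTHING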


class UpdateProcessor(object):
    """Build an update statement

    table - the table to update
    data - a dictionary of data to update (keys = row names)
    rawdata - data to update specified as sql expressions rather than python values
    clauses - a list of where clauses which will be ANDed together
    values - dict of values used in clauses

    does not support the FROM clause
    """

    def __init__(self, table, data=None, rawdata=None, clauses=None, values=None):
        self.table = table
        self.data = {}
        if data:
            self.data.update(data)
        self.rawdata = {}
        if rawdata:
            self.rawdata.update(rawdata)
        self.clauses = []
        if clauses:
            self.clauses.extend(clauses)
        self.values = {}
        if values:
            self.values.update(values)

    def __str__(self):
        if not self.data and not self.rawdata:
            return "-- incomplete update: no assigns"
        parts = ['UPDATE %s SET ' % self.table]
        assigns = ["%s = %%(data.%s)s" % (key, key) for key in self.data]
        assigns.extend(["%s = (%s)" % (key, self.rawdata[key]) for key in self.rawdata])
        parts.append(', '.join(sorted(assigns)))
        if self.clauses:
            parts.append('\nWHERE ')
            parts.append(' AND '.join(["( %s )" % c for c in sorted(self.clauses)]))
        return ''.join(parts)

    def __repr__(self):
        return "<UpdateProcessor: %r>" % vars(self)

    def get_values(self):
        """Returns unified values dict, including data"""
        ret = {}
        ret.update(self.values)
        for key in self.data:
            ret["data." + key] = self.data[key]
        return ret

    def set(self, **kwargs):
        """Set data via keyword args"""
        self.data.update(kwargs)

    def rawset(self, **kwargs):
        """Set rawdata via keyword args"""
        self.rawdata.update(kwargs)

    def make_revoke(self, event_id=None, user_id=None):
        """Add standard revoke options to the update"""
        if event_id is None:
            event_id = get_event()
        if user_id is None:
            context.session.assertLogin()
            user_id = context.session.user_id
        self.data['revoke_event'] = event_id
        self.data['revoker_id'] = user_id
        self.rawdata['active'] = 'NULL'
        self.clauses.append('active = TRUE')

    def execute(self):
        return _dml(str(self), self.get_values())
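

# Usage sketch for UpdateProcessor: revoking the active row of a versioned
# table. Illustrative only; the table and clause are hypothetical.
#
#   update = UpdateProcessor('tag_config',
#                            clauses=['tag_id = %(tag_id)i'],
#                            values={'tag_id': 42})
#   update.make_revoke()   # sets revoke_event/revoker_id, active = NULL,
#                          # and appends the "active = TRUE" clause
#   update.execute()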


class DeleteProcessor(object):
    """Build a delete statement

    table - the table to delete from
    clauses - a list of where clauses which will be ANDed together
    values - dict of values used in clauses
    """

    def __init__(self, table, clauses=None, values=None):
        self.table = table
        self.clauses = []
        if clauses:
            self.clauses.extend(clauses)
        self.values = {}
        if values:
            self.values.update(values)

    def __str__(self):
        parts = ['DELETE FROM %s ' % self.table]
        if self.clauses:
            parts.append('\nWHERE ')
            parts.append(' AND '.join(["( %s )" % c for c in sorted(self.clauses)]))
        return ''.join(parts)

    def __repr__(self):
        return "<DeleteProcessor: %r>" % vars(self)

    def get_values(self):
        """Returns unified values dict, including data"""
        ret = {}
        ret.update(self.values)
        return ret

    def execute(self):
        return _dml(str(self), self.get_values())
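

# Usage sketch for DeleteProcessor. Illustrative only; the table is
# hypothetical.
#
#   delete = DeleteProcessor('repo_queue', clauses=['id = %(id)i'],
#                            values={'id': 100})
#   delete.execute()       # returns the number of rows deleted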
class QueryProcessor(object):
|
|
"""
|
|
Build a query from its components.
|
|
- columns, aliases, tables: lists of the column names to retrieve,
|
|
the tables to retrieve them from, and the key names to use when
|
|
returning values as a map, respectively
|
|
- joins: a list of joins in the form 'table1 ON table1.col1 = table2.col2', 'JOIN' will be
|
|
prepended automatically; if extended join syntax (LEFT, OUTER, etc.) is required,
|
|
it can be specified, and 'JOIN' will not be prepended
|
|
- clauses: a list of where clauses in the form 'table1.col1 OPER table2.col2-or-variable';
|
|
each clause will be surrounded by parentheses and all will be AND'ed together
|
|
- values: the map that will be used to replace any substitution expressions in the query
|
|
- transform: a function that will be called on each row (not compatible with
|
|
countOnly or singleValue)
|
|
- opts: a map of query options; currently supported options are:
|
|
countOnly: if True, return an integer indicating how many results would have been
|
|
returned, rather than the actual query results
|
|
order: a column or alias name to use in the 'ORDER BY' clause
|
|
offset: an integer to use in the 'OFFSET' clause
|
|
limit: an integer to use in the 'LIMIT' clause
|
|
asList: if True, return results as a list of lists, where each list contains the
|
|
column values in query order, rather than the usual list of maps
|
|
rowlock: if True, use "FOR UPDATE" to lock the queried rows
|
|
group: a column or alias name to use in the 'GROUP BY' clause
|
|
(controlled by enable_group)
|
|
- enable_group: if True, opts.group will be enabled
|
|
- order_map: (optional) a name:expression map of allowed orders. Otherwise any column or alias
|
|
is allowed
|
|
"""
|
|
|
|
iterchunksize = 1000
|
|
|
|
def __init__(self, columns=None, aliases=None, tables=None,
|
|
joins=None, clauses=None, values=None, transform=None,
|
|
opts=None, enable_group=False, order_map=None):
|
|
self.columns = columns
|
|
self.aliases = aliases
|
|
if columns and aliases:
|
|
if len(columns) != len(aliases):
|
|
raise Exception('column and alias lists must be the same length')
|
|
# reorder
|
|
alias_table = sorted(zip(aliases, columns))
|
|
self.aliases = [x[0] for x in alias_table]
|
|
self.columns = [x[1] for x in alias_table]
|
|
self.colsByAlias = dict(alias_table)
|
|
else:
|
|
self.colsByAlias = {}
|
|
if columns:
|
|
self.columns = sorted(columns)
|
|
if aliases:
|
|
self.aliases = sorted(aliases)
|
|
self.tables = tables
|
|
self.joins = joins
|
|
if clauses:
|
|
self.clauses = sorted(clauses)
|
|
else:
|
|
self.clauses = clauses
|
|
self.cursors = 0
|
|
if values:
|
|
self.values = values
|
|
else:
|
|
self.values = {}
|
|
self.transform = transform
|
|
if opts:
|
|
self.opts = opts
|
|
else:
|
|
self.opts = {}
|
|
self.order_map = order_map
|
|
self.enable_group = enable_group
|
|
self.logger = logging.getLogger('koji.db')
|
|
|
|
def countOnly(self, count):
|
|
self.opts['countOnly'] = count
|
|
|
|
    def __str__(self):
        query = \
            """
SELECT %(col_str)s
  FROM %(table_str)s
%(join_str)s
%(clause_str)s
%(group_str)s
%(order_str)s
%(offset_str)s
%(limit_str)s
"""
        if self.opts.get('countOnly'):
            if self.opts.get('offset') \
                    or self.opts.get('limit') \
                    or (self.enable_group and self.opts.get('group')):
                # If we're counting with an offset and/or limit, we need
                # to wrap the offset/limited query and then count the results,
                # rather than trying to offset/limit the single row returned
                # by count(*). Because we're wrapping the query, we don't care
                # about the column values.
                col_str = '1'
            else:
                col_str = 'count(*)'
        else:
            col_str = self._seqtostr(self.columns)
        table_str = self._seqtostr(self.tables, sort=True)
        join_str = self._joinstr()
        clause_str = self._seqtostr(self.clauses, sep=')\n AND (')
        if clause_str:
            clause_str = ' WHERE (' + clause_str + ')'
        if self.enable_group:
            group_str = self._group()
        else:
            group_str = ''
        order_str = self._order()
        offset_str = self._optstr('offset')
        limit_str = self._optstr('limit')

        query = query % locals()
        if self.opts.get('countOnly') and \
            (self.opts.get('offset') or
             self.opts.get('limit') or
             (self.enable_group and self.opts.get('group'))):
            query = 'SELECT count(*)\nFROM (' + query + ') numrows'
        if self.opts.get('rowlock'):
            query += '\n FOR UPDATE'
        return query

    def __repr__(self):
        return '<QueryProcessor: ' \
               'columns=%r, aliases=%r, tables=%r, joins=%r, clauses=%r, values=%r, opts=%r>' % \
               (self.columns, self.aliases, self.tables, self.joins, self.clauses, self.values,
                self.opts)

    def _seqtostr(self, seq, sep=', ', sort=False):
        if seq:
            if sort:
                seq = sorted(seq)
            return sep.join(seq)
        else:
            return ''

    def _joinstr(self):
        if not self.joins:
            return ''
        result = ''
        for join in self.joins:
            if result:
                result += '\n'
            if re.search(r'\bjoin\b', join, re.IGNORECASE):
                # The join clause already contains the word 'join',
                # so don't prepend 'JOIN' to it
                result += ' ' + join
            else:
                result += ' JOIN ' + join
        return result

    def _order(self):
        # Don't bother sorting if we're just counting
        if self.opts.get('countOnly'):
            return ''
        order_opt = self.opts.get('order')
        if order_opt:
            order_exprs = []
            for order in order_opt.split(','):
                if order.startswith('-'):
                    order = order[1:]
                    direction = ' DESC'
                else:
                    direction = ''
                # Check if we're ordering by alias first
                if self.order_map is not None:
                    # order should only be a key in the map
                    expr = self.order_map.get(order)
                    if not expr:
                        raise koji.ParameterError(f'Invalid order term: {order}')
                else:
                    expr = self.colsByAlias.get(order)
                    if not expr:
                        if order in self.columns:
                            expr = order
                        else:
                            raise Exception('Invalid order: ' + order)
                order_exprs.append(expr + direction)
            return 'ORDER BY ' + ', '.join(order_exprs)
        else:
            return ''

    def _group(self):
        group_opt = self.opts.get('group')
        if group_opt:
            group_exprs = []
            for group in group_opt.split(','):
                if group:
                    group_exprs.append(group)
            return 'GROUP BY ' + ', '.join(group_exprs)
        else:
            return ''

    def _optstr(self, optname):
        optval = self.opts.get(optname)
        if optval:
            return '%s %i' % (optname.upper(), optval)
        else:
            return ''

    def singleValue(self, strict=True):
        # self.transform not applied here
        return _singleValue(str(self), self.values, strict=strict)

    def execute(self):
        query = str(self)
        if self.opts.get('countOnly'):
            return _singleValue(query, self.values, strict=True)
        elif self.opts.get('asList'):
            if self.transform is None:
                return _fetchMulti(query, self.values)
            else:
                # if we're transforming, generate the dicts so the transform can modify
                fields = self.aliases or self.columns
                data = _multiRow(query, self.values, fields)
                data = [self.transform(row) for row in data]
                # and then convert back to lists
                data = [[row[f] for f in fields] for row in data]
                return data
        else:
            data = _multiRow(query, self.values, (self.aliases or self.columns))
            if self.transform is not None:
                data = [self.transform(row) for row in data]
            return data

    def iterate(self):
        if self.opts.get('countOnly'):
            return self.execute()
        elif self.opts.get('limit') and self.opts['limit'] < self.iterchunksize:
            return self.execute()
        else:
            fields = self.aliases or self.columns
            fields = list(fields)
            cname = "qp_cursor_%s_%i_%i" % (id(self), os.getpid(), self.cursors)
            self.cursors += 1
            self.logger.debug('Setting up query iterator. cname=%r', cname)
            return self._iterate(cname, str(self), self.values.copy(), fields,
                                 self.iterchunksize, self.opts.get('asList'))

    def _iterate(self, cname, query, values, fields, chunksize, as_list=False):
        # We pass all this data into the generator so that the iterator works
        # from the snapshot when it was generated. Otherwise reuse of the processor
        # for similar queries could have unpredictable results.
        query = "DECLARE %s NO SCROLL CURSOR FOR %s" % (cname, query)
        c = context.cnx.cursor()
        c.execute(query, values)
        c.close()
        try:
            query = "FETCH %i FROM %s" % (chunksize, cname)
            while True:
                if as_list:
                    if self.transform is None:
                        buf = _fetchMulti(query, {})
                    else:
                        # if we're transforming, generate the dicts so the transform can modify
                        buf = _multiRow(query, self.values, fields)
                        buf = [self.transform(row) for row in buf]
                        # and then convert back to lists
                        buf = [[row[f] for f in fields] for row in buf]
                else:
                    buf = _multiRow(query, {}, fields)
                    if self.transform is not None:
                        buf = [self.transform(row) for row in buf]
                if not buf:
                    break
                for row in buf:
                    yield row
        finally:
            c = context.cnx.cursor()
            c.execute("CLOSE %s" % cname)
            c.close()

    def executeOne(self, strict=False):
        results = self.execute()
        if isinstance(results, list):
            if len(results) > 0:
                if strict and len(results) > 1:
                    raise koji.GenericError('multiple rows returned for a single row query')
                return results[0]
            elif strict:
                raise koji.GenericError('query returned no rows')
            else:
                return None
        return results

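# A minimal usage sketch for QueryProcessor. This is illustrative only; the
# table and column names below are hypothetical, not part of the koji schema.
#
#   query = QueryProcessor(
#       tables=['widget'],
#       columns=['widget.id', 'widget.name'],
#       aliases=['id', 'name'],
#       clauses=['widget.owner = %(owner)s'],
#       values={'owner': 'alice'},
#       opts={'order': '-id', 'limit': 10})
#   rows = query.execute()        # list of dicts keyed by alias
#   for row in query.iterate():   # chunked fetch via a server-side cursor
#       pass
#   n = QueryProcessor(tables=['widget'],
#                      opts={'countOnly': True}).singleValue()
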
class QueryView:
    # abstract base class

    # subclasses should provide...
    tables = []
    joins = []
    joinmap = {}
    fieldmap = {}
    default_fields = ()

    def __init__(self, clauses=None, fields=None, opts=None):
        self.clauses = clauses
        self.fields = fields
        self.opts = opts
        self._query = None

    @property
    def query(self):
        if self._query is not None:
            return self._query
        else:
            return self.get_query()

    def get_query(self):
        self._implicit_joins = []
        self._values = {}
        self._order_map = {}

        self.check_opts()

        tables = list(self.tables)  # copy
        clauses = self.get_clauses()
        # get_fields must run after get_clauses, since clauses can add implicit
        # joins that affect which fields '*' selects
        fields = self.get_fields(self.fields)
        aliases, columns = zip(*fields.items())
        joins = self.get_joins()
        self._query = QueryProcessor(
            columns=columns, aliases=aliases,
            tables=tables, joins=joins,
            clauses=clauses, values=self._values,
            opts=self.opts, order_map=self._order_map)

        return self._query

    def get_fields(self, fields):
        fields = fields or self.default_fields or ['*']
        if isinstance(fields, str):
            fields = [fields]

        # handle special field names
        flist = []
        for field in fields:
            if field == '*':
                # all fields that don't require additional joins
                for f in self.fieldmap:
                    joinkey = self.fieldmap[f][1]
                    if joinkey is None or joinkey in self._implicit_joins:
                        flist.append(f)
            elif field == '**':
                # all fields
                flist.extend(self.fieldmap)
            else:
                flist.append(field)

        return {f: self.map_field(f) for f in set(flist)}

    def check_opts(self):
        # some options may trigger joins
        if self.opts is None:
            return
        if 'order' in self.opts:
            for key in self.opts['order'].split(','):
                if key.startswith('-'):
                    key = key[1:]
                self._order_map[key] = self.map_field(key)
        if 'group' in self.opts:
            for key in self.opts['group'].split(','):
                self.map_field(key)

    def map_field(self, field):
        f_info = self.fieldmap.get(field)
        if f_info is None:
            raise koji.ParameterError(f'Invalid field for query: {field}')
        fullname, joinkey = f_info
        fullname = fullname or field
        if joinkey:
            self._implicit_joins.append(joinkey)
            # duplicates are removed later
        return fullname

    def get_clauses(self):
        # for now, just a very simple implementation
        result = []
        clauses = self.clauses or []
        for n, clause in enumerate(clauses):
            # TODO: more validation of clause contents
            if len(clause) == 2:
                # implicit operator
                field, value = clause
                if isinstance(value, (list, tuple)):
                    op = 'IN'
                else:
                    op = '='
            elif len(clause) == 3:
                field, op, value = clause
                op = op.upper()
                if op not in ('IN', '=', '!=', '>', '<', '>=', '<=', 'IS', 'IS NOT', '@>', '<@'):
                    raise koji.ParameterError(f'Invalid operator: {op}')
            else:
                raise koji.ParameterError(f'Invalid clause: {clause}')
            fullname = self.map_field(field)
            key = f'v_{field}_{n}'
            self._values[key] = value
            result.append(f'{fullname} {op} %({key})s')

        return result

    def get_joins(self):
        joins = list(self.joins)
        seen = set()
        # note we preserve the order that implicit joins were added
        for joinkey in self._implicit_joins:
            if joinkey in seen:
                continue
            seen.add(joinkey)
            joins.append(self.joinmap[joinkey])
        return joins

    def execute(self):
        return self.query.execute()

    def executeOne(self, strict=False):
        return self.query.executeOne(strict=strict)

    def iterate(self):
        return self.query.iterate()

    def singleValue(self, strict=True):
        return self.query.singleValue(strict=strict)

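# A sketch of a QueryView subclass (hypothetical tables and fields for
# illustration; the real subclasses live elsewhere in the hub code):
#
#   class WidgetQuery(QueryView):
#       tables = ['widget']
#       joinmap = {'owner': 'users ON widget.owner = users.id'}
#       fieldmap = {
#           'id': ['widget.id', None],
#           'name': ['widget.name', None],
#           'owner_name': ['users.name', 'owner'],  # pulls in the 'owner' join
#       }
#       default_fields = ('id', 'name')
#
#   # fields='**' selects all mapped fields, triggering the implicit join
#   rows = WidgetQuery(clauses=[['name', 'foo']], fields='**').execute()
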
class BulkInsertProcessor(object):
    def __init__(self, table, data=None, columns=None, strict=True, batch=1000):
        """Do bulk inserts - it has some limitations compared to
        InsertProcessor (no rawset, no dup_check).

        set() is replaced with add_record() to avoid confusion

        table - name of the table
        data - list of dicts, one per record
        columns - list/set of names of used columns - makes sense
                  mainly with strict=True
        strict - if True, all records must contain values for all columns.
                 if False, missing values will be inserted as NULLs
        batch - batch size for inserts (one statement per batch)
        """

        self.table = table
        self.data = []
        if columns is None:
            self.columns = set()
        else:
            self.columns = set(columns)
        if data is not None:
            self.data = data
            for row in data:
                self.columns |= set(row.keys())
        self.strict = strict
        self.batch = batch

    def __str__(self):
        if not self.data:
            return "-- incomplete insert: no data"
        query, params = self._get_insert(self.data)
        return query

    def _get_insert(self, data):
        """
        Generate one insert statement for the given data

        :param list data: list of rows (dict format) to insert
        :returns: (query, params)
        """

        if not data:
            # should not happen
            raise ValueError('no data for insert')
        parts = ['INSERT INTO %s ' % self.table]
        columns = sorted(self.columns)
        parts.append("(%s) " % ', '.join(columns))

        prepared_data = {}
        values = []
        for i, row in enumerate(data):
            row_values = []
            for key in columns:
                if key in row:
                    row_key = '%s%d' % (key, i)
                    row_values.append("%%(%s)s" % row_key)
                    prepared_data[row_key] = row[key]
                elif self.strict:
                    raise koji.GenericError("Missing value %s in BulkInsert" % key)
                else:
                    row_values.append("NULL")
            values.append("(%s)" % ', '.join(row_values))
        parts.append("VALUES %s" % ', '.join(values))
        return ''.join(parts), prepared_data

    def __repr__(self):
        return "<BulkInsertProcessor: %r>" % vars(self)

    def add_record(self, **kwargs):
        """Set a whole record via keyword args"""
        if not kwargs:
            raise koji.GenericError("Missing values in BulkInsert.add_record")
        self.data.append(kwargs)
        self.columns |= set(kwargs.keys())

    def execute(self):
        if not self.batch:
            self._one_insert(self.data)
        else:
            for i in range(0, len(self.data), self.batch):
                data = self.data[i:i + self.batch]
                self._one_insert(data)

    def _one_insert(self, data):
        query, params = self._get_insert(data)
        _dml(query, params)

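# A usage sketch for BulkInsertProcessor (hypothetical table and columns):
#
#   insert = BulkInsertProcessor('widget', strict=False, batch=500)
#   insert.add_record(name='foo', size=1)
#   insert.add_record(name='bar')   # size becomes NULL since strict=False
#   insert.execute()                # one INSERT statement per batch of 500 rows
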
def _applyQueryOpts(results, queryOpts):
    """
    Apply queryOpts to results in the same way QueryProcessor would.
    results is a list of maps.
    queryOpts is a map which may contain the following fields:
        countOnly
        order
        offset
        limit

    Note:
    - asList is supported by QueryProcessor but not by this method.
      We don't know the original query order, and so don't have a way to
      return a useful list. asList should be handled by the caller.
    - group is likewise supported by QueryProcessor but not by this method.
    """
    if queryOpts is None:
        queryOpts = {}
    if queryOpts.get('order'):
        order = queryOpts['order']
        reverse = False
        if order.startswith('-'):
            order = order[1:]
            reverse = True
        results.sort(key=lambda o: o[order], reverse=reverse)
    if queryOpts.get('offset'):
        results = results[queryOpts['offset']:]
    if queryOpts.get('limit'):
        results = results[:queryOpts['limit']]
    if queryOpts.get('countOnly'):
        return len(results)
    else:
        return results

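# A small illustration of _applyQueryOpts with hypothetical data:
#
#   _applyQueryOpts([{'id': 2}, {'id': 1}, {'id': 3}],
#                   {'order': '-id', 'limit': 2})
#   # -> [{'id': 3}, {'id': 2}]
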
class BulkUpdateProcessor(object):
    """Build a bulk update statement using a FROM clause

    table - the table to update
    data - list of dictionaries of update data (keys = column names)
    match_keys - the fields that are used to match rows

    The row data is provided as a list of dictionaries. Each entry
    must contain the same keys.

    The match_keys value indicates which keys are used to select the
    rows to update. The remaining keys are the actual updates.
    I.e. if you have data = [{'a': 1, 'b': 2}] with match_keys=['a'],
    this will set b=2 for rows where a=1
    """

    def __init__(self, table, data=None, match_keys=None):
        self.table = table
        self.data = data or []
        if match_keys is None:
            self.match_keys = []
        else:
            self.match_keys = list(match_keys)
        self._values = {}

    def __str__(self):
        return self.get_sql()

    def get_sql(self):
        if not self.data or not self.match_keys:
            return "-- incomplete bulk update"
        set_keys, all_keys = self.get_keys()
        match_keys = sorted(self.match_keys)

        utable = f'__kojibulk_{self.table}'
        utable = utable.replace('.', '_')  # in case the table name is schema qualified
        assigns = [f'{key} = {utable}.{key}' for key in all_keys]
        values = {}  # values for the substitution map
        fdata = []  # rows for the VALUES clause
        for n, row in enumerate(self.data):
            # each row is a dictionary with all keys
            parts = []
            for key in all_keys:
                v_key = f'val_{key}_{n}'
                values[v_key] = row[key]
                parts.append(f'%({v_key})s')
            fdata.append('(%s)' % ', '.join(parts))

        clauses = [f'{self.table}.{key} = {utable}.{key}' for key in match_keys]

        parts = [
            'UPDATE %s SET %s\n' % (self.table, ', '.join(assigns)),
            'FROM (VALUES %s)\nAS %s (%s)\n' % (
                ', '.join(fdata), utable, ', '.join(all_keys)),
            'WHERE (%s)' % ' AND '.join(clauses),
        ]
        self._values = values
        return ''.join(parts)

    def get_keys(self):
        if not self.data:
            raise ValueError('no update data')
        all_keys = list(self.data[0].keys())
        for key in all_keys:
            if not isinstance(key, str):
                raise TypeError('update data must use string keys')
        all_keys.sort()
        set_keys = [k for k in all_keys if k not in self.match_keys]
        set_keys.sort()
        # also check that the data is sane
        required = set(all_keys)
        for row in self.data:
            if set(row.keys()) != required:
                raise ValueError('mismatched update keys')
        return set_keys, all_keys

    def __repr__(self):
        return "<BulkUpdateProcessor: %r>" % vars(self)

    def execute(self):
        sql = self.get_sql()  # also sets self._values
        return _dml(sql, self._values)

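# A usage sketch for BulkUpdateProcessor (hypothetical table and columns):
#
#   update = BulkUpdateProcessor(
#       'widget',
#       data=[{'id': 1, 'size': 10}, {'id': 2, 'size': 20}],
#       match_keys=['id'])
#   update.execute()
#   # generates, roughly:
#   #   UPDATE widget SET id = __kojibulk_widget.id, size = __kojibulk_widget.size
#   #   FROM (VALUES (1, 10), (2, 20)) AS __kojibulk_widget (id, size)
#   #   WHERE (widget.id = __kojibulk_widget.id)
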

# the end