debian-koji/builder/kojid
Mike McLean df4a54e204 kojira on demand work
squashed to keep the history more readable

commit b4383d81f48f9c58cb53119cb453034c5676657f
Author: Mike McLean <mikem@redhat.com>
Date:   Fri Jun 21 09:03:07 2024 -0400

    unit tests

commit 151b6ea053fc2e93b104fb3f01749602401fa0ee
Author: Mike McLean <mikem@redhat.com>
Date:   Tue Jun 18 17:55:35 2024 -0400

    unit tests and fixes

commit 15457499665a0c0e0e45b17d19c6d07b6f681ca8
Author: Mike McLean <mikem@redhat.com>
Date:   Tue Jun 18 17:14:01 2024 -0400

    use tag name in waitrepo task for readability

commit a20a21d39d2cb96b02046788de77aa33a7cbc906
Author: Mike McLean <mikem@redhat.com>
Date:   Tue Jun 18 17:00:45 2024 -0400

    cleanup

commit a0058fce436a39de5cde6f11788ca4aaaa3553c0
Author: Mike McLean <mikem@redhat.com>
Date:   Tue Jun 18 16:44:22 2024 -0400

    better approach to repo lookup from task id

commit 057527d71318d4494d80a2f24510e82ac9bc33f8
Author: Mike McLean <mikem@redhat.com>
Date:   Tue Jun 18 10:42:08 2024 -0400

    support priority for requests

commit 882eaf2c4349e6f75db055fa36c80d66ab40526f
Author: Mike McLean <mikem@redhat.com>
Date:   Tue Jun 18 10:16:44 2024 -0400

    track user for request

commit 273739e2f43170d80dae9e3796185230fae0607e
Author: Mike McLean <mikem@redhat.com>
Date:   Mon Jun 17 15:37:16 2024 -0400

    update additional fields in repo_done_hook

commit d0a886eb161468675720549ad8a31921cd5c3647
Author: Mike McLean <mikem@redhat.com>
Date:   Mon Jun 17 15:14:38 2024 -0400

    simplify updateRepos

commit 2a3ab6839299dd507835804e6326d93f08aa4040
Author: Mike McLean <mikem@redhat.com>
Date:   Mon Jun 17 15:03:39 2024 -0400

    kojira: adjust cleanup of self.repos

commit dfc5934423b7f8f129ac9c737cc21d1798b33c2d
Author: Mike McLean <mikem@redhat.com>
Date:   Mon Jun 17 14:03:57 2024 -0400

    docs updates

commit 4c5d4c2b50b11844d5dd6c8295b33bcc4453928b
Author: Mike McLean <mikem@redhat.com>
Date:   Mon Jun 17 09:18:10 2024 -0400

    Apply repo_lifetime to custom repos even if current

commit 2b2d63a771244358f4a7d77766374448343d2c4c
Author: Mike McLean <mikem@redhat.com>
Date:   Mon Jun 17 09:36:50 2024 -0400

    fix migration script

commit 447a3f47270a324463a335d19b8e2c657a99ee9b
Author: Tomas Kopecek <tkopecek@redhat.com>
Date:   Fri Jun 7 11:32:14 2024 +0200

    migration script

commit f73bbe88eea7caf31c908fdaa5231e39d0f0d0a8
Author: Mike McLean <mikem@redhat.com>
Date:   Fri Jun 14 15:30:24 2024 -0400

    clean up some TODO items

commit 836c89131d2b125c2761cfbd3917473504d459e4
Author: Mike McLean <mikem@redhat.com>
Date:   Fri Jun 14 11:43:13 2024 -0400

    update unit tests

commit 4822ec580b96ae63778b71cee2127364bc31d258
Author: Mike McLean <mikem@redhat.com>
Date:   Fri Jun 14 11:17:24 2024 -0400

    streamline simple case for tag_first/last_change_event

commit 3474384c56a8a2e60288279b459000f3b9c54968
Author: Mike McLean <mikem@redhat.com>
Date:   Tue Jun 11 16:11:55 2024 -0400

    backwards compatible age checks in kojira

commit e796db0bdc6e70b489179bcddaa899855d64b706
Author: Mike McLean <mikem@redhat.com>
Date:   Fri Jun 14 11:49:37 2024 -0400

    repowatch unit test fixes

commit 7f17eb741502ab5417f70413f699c99e140f380d
Author: Mike McLean <mikem@redhat.com>
Date:   Thu Jun 6 21:35:11 2024 -0400

    adjust watch output; die if request fails

commit a0318c44576d6acab459f623c8ff0ab6961bd6b4
Author: Mike McLean <mikem@redhat.com>
Date:   Thu Jun 6 20:45:56 2024 -0400

    handle problem repos

commit d90ca6f9d41a39da86089a0fad7afdb649fd680b
Author: Mike McLean <mikem@redhat.com>
Date:   Thu May 30 22:43:56 2024 -0400

    fix typos

commit 29830d1b8125664ddeae5ccb7e6b6e53260cdc47
Author: Mike McLean <mikem@redhat.com>
Date:   Thu May 30 16:57:48 2024 -0400

    clarify --wait-repo help text

commit 43db92302643b67e7f6f419424d6813e5dca53f3
Author: Mike McLean <mikem@redhat.com>
Date:   Tue May 21 17:32:44 2024 -0400

    unit tests

commit 27f979fbccc5a286fba9caeec16ca7092fa79813
Author: Mike McLean <mikem@redhat.com>
Date:   Tue May 21 17:23:32 2024 -0400

    wait-repo compat

commit f3a8f76d9340b1bdddb5f7bab154962e848d4d10
Author: Mike McLean <mikem@redhat.com>
Date:   Thu May 16 20:14:59 2024 -0400

    fixes

commit 6638b0fd76b31aa49ad0cf79639014ad9ace09f0
Author: Mike McLean <mikem@redhat.com>
Date:   Thu May 16 16:41:50 2024 -0400

    use old regen-repo code for older hubs

commit 7f2d8ec49fe1d2d511759221a821a146a4ef6837
Author: Mike McLean <mikem@redhat.com>
Date:   Thu May 16 16:18:36 2024 -0400

    fixes

commit 791df709c10d3c10c9b79f59f4fda435ac3bd285
Author: Mike McLean <mikem@redhat.com>
Date:   Thu May 16 12:22:09 2024 -0400

    don't trigger regens from scheduler. kojira is enough

commit 75f5e695287b92d53e4f173f57b12b5a7159adaf
Author: Mike McLean <mikem@redhat.com>
Date:   Wed May 15 22:54:08 2024 -0400

    more docs

commit 0e0f53160bbe09e35409dabce63739eb50813310
Author: Mike McLean <mikem@redhat.com>
Date:   Wed May 15 21:49:27 2024 -0400

    support MaxRepoTasksMaven

commit 88da9639860cb7c0d92f7c3bc881cd480b4e1620
Author: Mike McLean <mikem@redhat.com>
Date:   Wed May 15 16:15:12 2024 -0400

    drop unused method

commit 4cdbe6c4d2ba8735312d0cd0095612c159db9cce
Author: Mike McLean <mikem@redhat.com>
Date:   Wed May 15 15:48:55 2024 -0400

    api for querying repo queue

commit 2367eb21e60865c8e5a2e19f2f840938dbbbc58b
Author: Mike McLean <mikem@redhat.com>
Date:   Wed May 15 15:24:44 2024 -0400

    flake8

commit 811378d703a68b63c577468b85f4a49a9be2c441
Author: Mike McLean <mikem@redhat.com>
Date:   Tue May 14 16:20:59 2024 -0400

    record custom opts in repo.json

commit d448b6b3417e95bff2bae3b5a3790877ac834816
Author: Mike McLean <mikem@redhat.com>
Date:   Mon May 13 15:32:33 2024 -0400

    drop unused RawClauses code

    will revisit in a later PR

commit 0422220e05ee3d43e5431a0d741f3632f42a8434
Author: Mike McLean <mikem@redhat.com>
Date:   Sat May 11 13:34:12 2024 -0400

    clean up BulkUpdateProcessor and add tests

commit 6721f847e655a3794d4f2fce383070cb6ad2d2d1
Author: Mike McLean <mikem@redhat.com>
Date:   Fri May 10 17:43:17 2024 -0400

    fix unit test after rebase

commit 833286eead2b278a99fe9ef80c13df88ca3af48c
Author: Mike McLean <mikem@redhat.com>
Date:   Fri Apr 5 00:23:15 2024 -0400

    adjust valid_repo opts checks

commit 7f418d550d8636072292ee05f6e9748b622c2d89
Author: Mike McLean <mikem@redhat.com>
Date:   Fri Apr 5 00:03:33 2024 -0400

    extend valid_repo unit test and fix a bug

commit eb844ba15894cb7fc2a739908e7d83c80fd82524
Author: Mike McLean <mikem@redhat.com>
Date:   Thu Apr 4 15:41:08 2024 -0400

    test_request_existing_req_invalid

commit 2e290453abf9ac31f51a1853aa123a2a34ad9605
Author: Mike McLean <mikem@redhat.com>
Date:   Thu Apr 4 15:22:06 2024 -0400

    test_request_at_event

commit 2c3389c24f5cabfbbaeb70512a4ba917cf5bd09b
Author: Mike McLean <mikem@redhat.com>
Date:   Thu Apr 4 11:14:37 2024 -0400

    test_request_new_req

commit 2cdeab9b5f5b0bff4c4806ae802e5f5e571bb25e
Author: Mike McLean <mikem@redhat.com>
Date:   Thu Apr 4 10:56:36 2024 -0400

    test_request_existing_req

commit 63c9ddab5f3e50b3537a82f390e9da5a66275a25
Author: Mike McLean <mikem@redhat.com>
Date:   Thu Apr 4 10:45:22 2024 -0400

    test_request_existing_repo

commit 03b5ba5c57ce1ade0cf7990d23ec599c8cb19482
Author: Mike McLean <mikem@redhat.com>
Date:   Thu Apr 4 10:04:36 2024 -0400

    more stubs

commit 92d16847f2cc2db0d8ee5afcf2d812b9bb6467ec
Author: Mike McLean <mikem@redhat.com>
Date:   Wed Apr 3 22:44:00 2024 -0400

    fix import

commit 1f621685532564a1c1ac373e98bec57c59107e6c
Author: Mike McLean <mikem@redhat.com>
Date:   Wed Apr 3 22:16:25 2024 -0400

    stub test

commit 45eef344e701c910f172d5642676d8f70d44049a
Author: Mike McLean <mikem@redhat.com>
Date:   Wed Apr 3 22:01:31 2024 -0400

    link repo doc in toc

commit bfffe233051c71785c335a82f64bf2abaae50078
Author: Mike McLean <mikem@redhat.com>
Date:   Wed Apr 3 21:57:35 2024 -0400

    unused options

commit 19f5a55faecf8229d60d21fd3e334e9a7f813384
Author: Mike McLean <mikem@redhat.com>
Date:   Wed Apr 3 16:37:50 2024 -0400

    include new setting

commit b7f81bd18016f862d1246ab6c81172fcd9c8b0ed
Author: Mike McLean <mikem@redhat.com>
Date:   Wed Apr 3 08:21:16 2024 -0400

    test + fixes

commit 16564cfb8e2725b395c624139ce3d878a6dd9d53
Author: Mike McLean <mikem@redhat.com>
Date:   Wed Apr 3 07:44:15 2024 -0400

    more kojira unit tests

commit 6b55c51302331ea09a126b9f3efbc71da164c0fb
Author: Mike McLean <mikem@redhat.com>
Date:   Wed Apr 3 07:06:20 2024 -0400

    fix unit test

commit 0b000c124b17f965c5606d30da792ba47db542cf
Author: Mike McLean <mikem@redhat.com>
Date:   Tue Apr 2 22:07:08 2024 -0400

    refactor repo delete

commit 0a03623fb018c80c8d38896fc99686cac56307fa
Author: Mike McLean <mikem@redhat.com>
Date:   Tue Apr 2 19:13:15 2024 -0400

    avoid circular import issue

commit 137d699b7653977f63f30041d9f5f1a88ae08d43
Author: Mike McLean <mikem@redhat.com>
Date:   Tue Apr 2 19:03:18 2024 -0400

    some kojira cleanup

commit 252e69d6dd17bb407b88b79efbb243ca5e441765
Author: Mike McLean <mikem@redhat.com>
Date:   Tue Apr 2 17:21:14 2024 -0400

    adjust state transition check

commit 336018081709fd44e7f12933b1ea59e02bff4aed
Author: Mike McLean <mikem@redhat.com>
Date:   Tue Apr 2 16:05:45 2024 -0400

    update RepoQuery

commit 68bb44848d9024c5520d8e7e2cc262adaa083cd1
Author: Mike McLean <mikem@redhat.com>
Date:   Tue Mar 12 11:46:59 2024 -0400

    decode query bytes in log

commit 818431fb9b09db162e73f7cb1adcddc8b151c821
Author: Mike McLean <mikem@redhat.com>
Date:   Fri Mar 29 14:47:16 2024 -0400

    sanity check requests before reusing

commit 63fee0ba1ea9d41d504bb09aeaea064246c16ff9
Author: Mike McLean <mikem@redhat.com>
Date:   Fri Mar 29 11:41:13 2024 -0400

    repo.query api call

commit bcf9a3cf64167612e3cd355aae7c41dd348cb8db
Author: Mike McLean <mikem@redhat.com>
Date:   Fri Mar 29 10:31:58 2024 -0400

    reduce some cli code duplication

commit 3e870cfd088c69c4aaaa9a0f938bcce740b3f42c
Author: Mike McLean <mikem@redhat.com>
Date:   Thu Mar 28 18:27:18 2024 -0400

    tweak warnings in external repo check

commit 0dfda64b806f2377d9c591105c83a4f05851b17a
Author: Mike McLean <mikem@redhat.com>
Date:   Thu Mar 28 14:43:50 2024 -0400

    clean repo queue

commit e5d328faa00c74e087f0b0d20aea7cd79ffb5ee4
Author: Mike McLean <mikem@redhat.com>
Date:   Thu Mar 28 14:05:12 2024 -0400

    implement retry limit for repo queue

commit 2185f3c9e32747c9657f2b9eb9ce6e3ca6d06ff8
Author: Mike McLean <mikem@redhat.com>
Date:   Wed Mar 27 22:40:13 2024 -0400

    cleanup a few TODOs

commit b45be8c44367bca9819561a0e928999b9a9e2428
Author: Mike McLean <mikem@redhat.com>
Date:   Wed Mar 27 22:22:17 2024 -0400

    tweak test

commit 546b161e20d0b310462dda705ae688e25b385cf5
Author: Mike McLean <mikem@redhat.com>
Date:   Wed Mar 27 13:43:06 2024 -0400

    more kojira tests

commit f887fdd12e59e36be561c1a89687a523e112b9d4
Author: Mike McLean <mikem@redhat.com>
Date:   Tue Mar 26 20:16:11 2024 -0400

    unit tests for RepoWatcher

commit e78b41431f3b45ae9e09d9a246982df9bb2c2374
Author: Mike McLean <mikem@redhat.com>
Date:   Tue Mar 26 10:53:14 2024 -0400

    fix unit tests

commit 64328ecb27e5598ec8977617e67d6dd630bc8db7
Author: Mike McLean <mikem@redhat.com>
Date:   Mon Mar 25 14:03:19 2024 -0400

    custom opts sorted out?

commit e3cee8c48bcf585a1a14aa8e56e43aaba2ccd63b
Author: Mike McLean <mikem@redhat.com>
Date:   Mon Mar 25 12:50:34 2024 -0400

    allow containment operator

commit bef7bbc3b2a16a6643bedb47be044c202a2bad2d
Author: Mike McLean <mikem@redhat.com>
Date:   Mon Mar 25 11:59:15 2024 -0400

    partial

commit 01788dfe386a07960c5c7888350e3917b44a0bab
Author: Mike McLean <mikem@redhat.com>
Date:   Sat Mar 23 13:47:22 2024 -0400

    fragment: struggling with repo opt timing

commit 44504bfbde4cf981391ea02127a05c4f0c2fc4a3
Author: Mike McLean <mikem@redhat.com>
Date:   Fri Mar 22 17:14:57 2024 -0400

    fine to have default values in the class

commit 1bfa520dd599acccd45f221f71c64fbefc3b5554
Author: Mike McLean <mikem@redhat.com>
Date:   Fri Mar 22 17:14:18 2024 -0400

    option renamed

commit a5db9d015a25f71fdb5e2dadcae55a8c5b7ec956
Author: Mike McLean <mikem@redhat.com>
Date:   Fri Mar 22 17:04:32 2024 -0400

    flake8

commit c02244f8018b651f309f39eb60f926209454dea2
Author: Mike McLean <mikem@redhat.com>
Date:   Fri Mar 22 16:59:15 2024 -0400

    more config options in repos.py

commit 9bf3edc0cf2c85a23964b79c4489bc9592656f16
Author: Mike McLean <mikem@redhat.com>
Date:   Fri Mar 22 15:39:52 2024 -0400

    use requests by default in regen-repo

commit 78c6e8a4459856fa333763b1977633307fd81cc3
Author: Mike McLean <mikem@redhat.com>
Date:   Fri Mar 22 13:49:00 2024 -0400

    adjust watch_fields

commit eadb2a24b9e0f324ac053c4bdede0865d4ed5bfa
Author: Mike McLean <mikem@redhat.com>
Date:   Fri Mar 22 12:27:23 2024 -0400

    adjust event validation

commit 3140e73cfccdcc25765c6f330073c991a44cbd9a
Author: Mike McLean <mikem@redhat.com>
Date:   Fri Mar 22 12:01:24 2024 -0400

    wait-repo tweaks

commit d1a8174cdd917bbf74882c51f1a7eaf4f02e542a
Author: Mike McLean <mikem@redhat.com>
Date:   Fri Mar 22 10:35:28 2024 -0400

    cli: wait-repo-request command

commit b2d08ac09880a1931b7f40b68d5ca765cd49a3a6
Author: Mike McLean <mikem@redhat.com>
Date:   Fri Mar 22 10:04:46 2024 -0400

    drop complex request options from wait-repo

commit b4ab55f241a693c0c0d08e386f998394a295fc7c
Author: Mike McLean <mikem@redhat.com>
Date:   Fri Mar 22 09:36:37 2024 -0400

    fix call

commit c04417439c4684342ac0d4423b341d363bc80e92
Author: Mike McLean <mikem@redhat.com>
Date:   Fri Mar 22 09:32:48 2024 -0400

    typo

commit 29be83b1523d45eb77cfe4959c9d6bc5c940ebbe
Author: Mike McLean <mikem@redhat.com>
Date:   Wed Mar 20 07:28:12 2024 -0400

    partial...

commit cd0ba3b6c2c47fe5bac4cf823b886462e092e2b3
Author: Mike McLean <mikem@redhat.com>
Date:   Tue Mar 19 23:13:47 2024 -0400

    drop event="new" code

commit 7f4f2356eceec03228e4a92b13e5593f956c390d
Author: Mike McLean <mikem@redhat.com>
Date:   Mon Mar 18 21:00:25 2024 -0400

    kojira on demand work

    squashed because the branch was getting unwieldy
    mostly working at this point, but there is a bit of outstanding work

    commit e127878460a932cc77c399f69c40f0993c765dc7
    Author: Mike McLean <mikem@redhat.com>
    Date:   Mon Mar 18 11:20:33 2024 -0400

        stale comment

    commit d0849d50b865f4f3783ddde5e1e6cf10db56ed39
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 17 23:58:13 2024 -0400

        don't expire at_event repos

    commit 8866db0e25b072aa12cc2827c62093b000fa7897
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 17 23:43:24 2024 -0400

        typo

    commit e2a5fd639b88c7b88708e782f0b7398296d2f805
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 17 23:40:08 2024 -0400

        repos.py: support at_event

    commit 6518f1656976ea2beb2cf732c82db0f159b09d15
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 17 22:20:35 2024 -0400

        update repo symlink logic

    commit 50d5e179f56393dd52c7225fc6f053d0095e9599
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 17 22:20:01 2024 -0400

        ...

    commit 429fc85b391e0b5e637e20859f1094a37a5eab39
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 17 21:18:44 2024 -0400

        block owner opt in makeTask and host.subtask

    commit 40fcfe667ef70987444756f6d5554919d89fb1de
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 17 20:49:37 2024 -0400

        db lock for repo queue

    commit dfd94fac8fb96328b12bcf2f8f6f7e2d52deea85
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 17 17:47:39 2024 -0400

        ...

    commit ecd9611e5d84d8a98920c40805616a6376ca652e
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 17 17:45:38 2024 -0400

        move new exports around

    commit a2e086df07f7b03dc4505a61f9b213e6e2ff20a5
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 17 16:46:29 2024 -0400

        drop noisy debug line

    commit 497bd773baa274d205df3bba317ee80617cc56a0
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 17 16:20:56 2024 -0400

        ...

    commit 457c986894de754a927bc4880687e0f47c29cbdd
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 17 16:19:12 2024 -0400

        ...

    commit 3aa0fa4862b37b7d178b1b7bb9a521ea01e7dded
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 17 16:18:30 2024 -0400

        ...

    commit 391c2009671dea1270cce01666d04ad2ade0c323
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 17 16:15:32 2024 -0400

        ...

    commit f3794e2acc8eef38e0c65fb27d3b2b3a58f53311
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 17 16:12:53 2024 -0400

        ...

    commit aea5e1a91f9246cce5f162bbea3d4846e87b9811
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 17 16:11:53 2024 -0400

        ...

    commit dc68ed8f0a43c9418c0c813f05a761bc8303c2b0
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 17 16:10:34 2024 -0400

        typo

    commit 73c72c8ed08744a188e4ae977b7ba2d92c75401b
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 17 16:08:15 2024 -0400

        pruning tweaks

    commit d3a10f8d5ef77a86db0e64a845f360d9f2cc2e17
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 17 15:50:57 2024 -0400

        kojira: use ordered dict for delete queue

    commit f6d7d44bac22840ee3ae1a93375c3b5ad430869c
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 17 14:59:05 2024 -0400

        rework repo expiration and lifetimes a bit

    commit 8bb91611c05ccb5d91910718a07494c08665ec22
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 17 00:27:34 2024 -0400

        more kojira rework

    commit 368d25a31d61eae8712591183bd2db1ff78f59d1
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 17 00:27:17 2024 -0400

        cleanup

    commit 292a1e4fdcc4098137156a42072e5bfda2f711df
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sat Mar 16 23:51:45 2024 -0400

        track update time for repos

    commit 01a7469ef7bcd952f45d732e4bb3b5f4bab2338a
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sat Mar 16 17:42:42 2024 -0400

        factor in implicit joins for fields="*"

    commit f9aba4557108b2005cf518e4bf316befa7f29911
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sat Mar 16 15:25:34 2024 -0400

        partial repo docs

    commit 74eae7104849237a4049a78c94b05187a2219f74
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sat Mar 16 13:17:36 2024 -0400

        remove some obsolete code from kojira

    commit d883807967a0d6d67a6e262a119ff5e03b8a947e
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sat Mar 16 11:42:48 2024 -0400

        ...

    commit 3bc3aa98913463aa209bba1cecc71fc30f6ef42f
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sat Mar 16 11:12:50 2024 -0400

        do_auto_repos

    commit da69f05555f05ded973b4ade064ed7e5f7e70acd
    Author: Mike McLean <mikem@redhat.com>
    Date:   Fri Feb 23 14:56:30 2024 -0500

        fakehub: option to override config

    commit 13a4ffdf9cd915b6af7b85120d87d50b8f6db5ed
    Author: Mike McLean <mikem@redhat.com>
    Date:   Fri Mar 15 22:35:50 2024 -0400

        tweak logging

    commit 01af487cced25c0edaa9e98e5dc7bb7dc9c4d6bd
    Author: Mike McLean <mikem@redhat.com>
    Date:   Fri Mar 15 22:16:21 2024 -0400

        adjust archlist for external repo check

    commit eb1c66f57a508f65dcac0e32cfaa3e178ed40bad
    Author: Mike McLean <mikem@redhat.com>
    Date:   Fri Mar 15 18:45:53 2024 -0400

        tweak logging; wait-repo --new

    commit 3dab52d497926a6be80a3c98cc29f0cb6478926f
    Author: Mike McLean <mikem@redhat.com>
    Date:   Fri Mar 15 15:03:23 2024 -0400

        typo

    commit 503365a79998aa2ee0eb2bd9b412747cdec50ab1
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Mar 14 00:17:24 2024 -0400

        ...

    commit 46ec62e96334690344de18d535f7b9c4fd87d877
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Mar 14 00:16:09 2024 -0400

        separate get/set for erepo data

    commit 25c2861509cfebcfc38be5fff6c0b382dfcca224
    Author: Mike McLean <mikem@redhat.com>
    Date:   Wed Mar 13 09:08:45 2024 -0400

        only update erepo data in db if it changed

    commit bc5db7494a486ae39b99dba4875547a8e8bc1ee0
    Author: Mike McLean <mikem@redhat.com>
    Date:   Wed Mar 13 09:03:03 2024 -0400

        ...

    commit 55b947fe2889dcb3b6112e9e80de926ef0ab70fa
    Author: Mike McLean <mikem@redhat.com>
    Date:   Wed Mar 13 08:48:45 2024 -0400

        partial work

    commit 7e91985a378754ae2ba88e0e2182bdf6302416ef
    Author: Mike McLean <mikem@redhat.com>
    Date:   Wed Mar 13 08:22:23 2024 -0400

        handle external_repo_data history in cli

    commit 0aeae31215af98ea8580307750389873f1e2521e
    Author: Mike McLean <mikem@redhat.com>
    Date:   Wed Mar 13 08:15:50 2024 -0400

        set_external_repo_data

    commit d85e93c0c294770d2384a41a3f2c09b4a64ae3c4
    Author: Mike McLean <mikem@redhat.com>
    Date:   Wed Mar 13 07:58:18 2024 -0400

        support external_repo_data in query_history

    commit 88fcf7ac5b8893bd045af017df1eb22a3cce8cb0
    Merge: 8449ebfeb eba8de247
    Author: Mike McLean <mikem@redhat.com>
    Date:   Tue Mar 12 00:01:57 2024 -0400

        Merge remote-tracking branch 'origin' into kojira-on-demand

    commit 8449ebfeb7976f5a5bfea78322c536cf0db6aa54
    Author: Mike McLean <mikem@redhat.com>
    Date:   Mon Mar 11 23:56:25 2024 -0400

        drop stray file

    commit 3d3716454b9f12c1807f8992ecd01cde3d9aade9
    Author: Mike McLean <mikem@redhat.com>
    Date:   Mon Mar 11 23:49:20 2024 -0400

        flake8

    commit f9014b6b689e5a1baf355842cf13905b8c50c3d8
    Author: Mike McLean <mikem@redhat.com>
    Date:   Mon Mar 11 23:44:32 2024 -0400

        handle deleted tags sanely in tag_last_change_event

    commit 7d584e99a1a580039d18210c2cc857eb3419394f
    Author: Mike McLean <mikem@redhat.com>
    Date:   Mon Mar 11 14:50:07 2024 -0400

        typo

    commit 6ac5921ce55ed356ba8c66466ebf56bb424591a9
    Author: Mike McLean <mikem@redhat.com>
    Date:   Mon Mar 11 14:49:35 2024 -0400

        add external_repo_data table. check ext repo tables for first/last tag change events

    commit e107400463679113971daaa400d75ec006f4dca5
    Author: Mike McLean <mikem@redhat.com>
    Date:   Mon Mar 11 12:14:21 2024 -0400

        fix newer_than logic in WaitrepoTask

    commit 4a1175a35e6ad7c59b3622a6028e2cd68e29bb79
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 10 23:47:29 2024 -0400

        todos

    commit c13d9e99d19bc40e59fd136b540b6a8c6e12a50f
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 10 23:30:59 2024 -0400

        AllowNewRepo hub config

    commit e3176cda238d3357fed0b905b03dfc0319dab12e
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 10 23:00:45 2024 -0400

        fixes

    commit d486960a441fbb517492a61ef2529370035a765a
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 10 22:48:00 2024 -0400

        request min_event never null or in future

    commit 4cc0d38b8e4bf1254bb156d085614f83929e1161
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 10 22:32:45 2024 -0400

        ...

    commit bb0dc41cd6be4c42d4cd033e07210f1184c2c385
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 10 22:23:52 2024 -0400

        default min_event. don't allow future events

    commit 1dccf0a56b1e3f83107760111264249527abeb68
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 10 17:27:11 2024 -0400

        use BulkUpdateProcessor in update_end_events

    commit 03c791edd3bb49359f2a01eaf53cbb717c53833e
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Mar 10 17:26:26 2024 -0400

        BulkUpdateProcessor

    commit 4bd2a0da1c998ce14fd856e68318551747867e06
    Author: Mike McLean <mikem@redhat.com>
    Date:   Fri Mar 8 14:53:53 2024 -0500

        update_end_events()

    commit b45b13bcba141ea6b30618fb76c1a94593dfe569
    Author: Mike McLean <mikem@redhat.com>
    Date:   Fri Mar 8 13:03:33 2024 -0500

        record begin/end events in repo_init

    commit 6f1adf51d9e24f80369df8b96010c0d6d123b448
    Author: Mike McLean <mikem@redhat.com>
    Date:   Fri Mar 8 12:33:40 2024 -0500

        QueryView: accept single field value

    commit 6b292d9a4b1bda56ff8091fbcb126749f952d045
    Author: Mike McLean <mikem@redhat.com>
    Date:   Fri Mar 8 12:28:02 2024 -0500

        adjust query fields

    commit e9e8e74703de8b6c531944c05d54447f0d7cb13f
    Author: Mike McLean <mikem@redhat.com>
    Date:   Fri Mar 8 12:18:12 2024 -0500

        QueryView: adjust special field name handling

    commit 97d910d70634183a3d5ae804176a5c8691882b7a
    Author: Mike McLean <mikem@redhat.com>
    Date:   Fri Mar 8 11:45:54 2024 -0500

        adjust event fields

    commit c70d34805227a61ab96176537dae64db3883e58f
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Mar 7 23:37:29 2024 -0500

        honor owner opt to make_task

    commit 40601d220179eb9718023002f8811ce5cbd09860
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Mar 7 23:29:50 2024 -0500

        ...

    commit 6f84ca3aa8c24d4618294027dce7a23620a3e2d7
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Mar 7 23:24:22 2024 -0500

        typo

    commit c423b8a4cc5fd4ed5c762e7b5adc06449c72ea70
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Mar 7 23:22:18 2024 -0500

        use kojira user for repo tasks

    commit 63dacff462ce064bbdf0b5c6e8ef14b2abe08e0c
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Mar 7 23:05:12 2024 -0500

        hook to fulfill requests when repos are marked ready

    commit aa79055c1e404a4c4fa9ac894fe978c8f9827f72
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Mar 7 01:08:19 2024 -0500

        no more data field

    commit 7dd029fb94e24004793e2d1232b3225b3cee5c97
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Mar 7 01:01:41 2024 -0500

        use full opts in request entries too

    commit 73dc2f232b231467d12355af0ace14284f5422a8
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Mar 7 00:54:41 2024 -0500

        ...

    commit 414d0a55cf66d93b6fb79e9677f68fd141edc655
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Mar 7 00:54:01 2024 -0500

        propagate opts in repo_init

    commit 99c1dde4771164d215f8c9a9acc0dadb678d047b
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Mar 7 00:20:57 2024 -0500

        include opts in query

    commit 08289b3444612920856e6a949a379f61cb46b5e7
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Mar 7 00:15:12 2024 -0500

        missing import

    commit bc3ca72c084b8e8de678ecbdcf6bbcfe972363e1
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Mar 7 00:10:45 2024 -0500

        more opts support

    commit f7c12cfe5f5b6c6c7895cd5eb4cdeb45757022a1
    Author: Mike McLean <mikem@redhat.com>
    Date:   Wed Mar 6 23:59:08 2024 -0500

        handle repo opts in request call

    commit 02a75f3996d59ae36f046327fca766e8799ef35b
    Author: Mike McLean <mikem@redhat.com>
    Date:   Wed Mar 6 22:01:06 2024 -0500

        fix import

    commit 7fe52dc83a80c0f68580d274bd2e60c57ab2e26d
    Author: Mike McLean <mikem@redhat.com>
    Date:   Wed Mar 6 21:58:59 2024 -0500

        fix fields

    commit f016c3a46d901ca762f5e8824fcd5efad2eede57
    Author: Mike McLean <mikem@redhat.com>
    Date:   Wed Mar 6 21:47:40 2024 -0500

        move code into kojihub/repos

    commit 9953009d3cc6f08cd16cbaa593ae79796ac86fa2
    Author: Mike McLean <mikem@redhat.com>
    Date:   Wed Mar 6 21:15:17 2024 -0500

        more unit test fixes

    commit f5decfaff3f56601262752e8a06b6f97bc4cfb33
    Author: Mike McLean <mikem@redhat.com>
    Date:   Wed Mar 6 20:51:07 2024 -0500

        unit test

    commit b51d4979824abe6ddc402011d21394854f46687e
    Author: Mike McLean <mikem@redhat.com>
    Date:   Wed Mar 6 20:19:06 2024 -0500

        flake8

    commit aeee5b59df4e9da93db83874f022419c24b37162
    Author: Mike McLean <mikem@redhat.com>
    Date:   Tue Feb 20 18:05:25 2024 -0500

        stub: tracking opts

    commit b5c150b52f575c681bdacb4c87e690653edc465a
    Author: Mike McLean <mikem@redhat.com>
    Date:   Mon Feb 19 15:11:40 2024 -0500

        different approach for raw clauses

    commit a9001c97935f3ad90571589688b1f291242bad08
    Author: Mike McLean <mikem@redhat.com>
    Date:   Mon Feb 19 14:32:57 2024 -0500

        and any necessary values and joins

    commit 84a46633b7dc1303e48367b614b99de3730a865d
    Author: Mike McLean <mikem@redhat.com>
    Date:   Mon Feb 19 14:17:12 2024 -0500

        give hub code a way to use raw clauses with QueryView

    commit 5d43c18f56563fc14f12d12c57f044125a5b33f9
    Author: Mike McLean <mikem@redhat.com>
    Date:   Mon Feb 19 14:09:27 2024 -0500

        private vars

    commit 91992f2e7b0a6cdd5e7cf8b99f6c37cfb20b08a6
    Author: Mike McLean <mikem@redhat.com>
    Date:   Mon Feb 19 14:02:07 2024 -0500

        saner data from get_fields

    commit 1e581cd5a5f3a6e257c3147a8ea763987984403c
    Author: Mike McLean <mikem@redhat.com>
    Date:   Mon Feb 19 13:26:34 2024 -0500

        update test and include tag_first_change_event()

    commit 3509300b0b1c6bb516b5552f2b1d37008231efae
    Author: Mike McLean <mikem@redhat.com>
    Date:   Mon Feb 19 12:42:53 2024 -0500

        revert global verbose option

    commit 4173e8610b0beed3dcea14849da1f115eb43c293
    Author: Mike McLean <mikem@redhat.com>
    Date:   Mon Feb 19 07:59:48 2024 -0500

        better ordering support in QueryView

    commit 359543b95cd524d5f4d8d82854680452ee07fd00
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Feb 18 01:19:30 2024 -0500

        also include test from multirepo

    commit 1ceb8c01f92cfe5029c78688b14f643e1fa8be12
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Feb 18 00:18:39 2024 -0500

        constraint

    commit 064bfc18b3a07edd602192bc4f48ac52adeedc3f
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sun Feb 18 00:00:15 2024 -0500

        tagFirstChangeEvent, plus fix

    commit 0efbfed21ec3b66841a7e4996e59bc8aaeed352b
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sat Feb 17 22:37:08 2024 -0500

        fix

    commit 3ead49b9ed7f643e7ba2db2077993eb515f10e38
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sat Feb 17 21:54:05 2024 -0500

        cleanup

    commit be2beb37fd35b46a5b4d60f39c8040640dfc7800
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sat Feb 17 21:20:29 2024 -0500

        rename request field, clean up Watcher args

    commit d392a974a1cbba119abc6a9e99e54d45a0cf0d62
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sat Feb 17 18:38:21 2024 -0500

        ...

    commit 70ee37dbafc6c4e77a62aac44f11747c0f6bfc25
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sat Feb 17 18:37:08 2024 -0500

        use tagLastChangeEvent for min_event=last

    commit 82d0d77679afc163bb5c36e43f834c109d7e6371
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sat Feb 17 18:33:04 2024 -0500

        tag_last_change_event: support inheritance

    commit c3c87f8ccf4feea321d9bfa54cc1f223431a8d13
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sat Feb 17 17:55:10 2024 -0500

        waitrepo anon mode (no request)

    commit c6994353d8daa4cb615eae4dde0368b97ea33d18
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sat Feb 17 09:32:39 2024 -0500

        don't reuse a request for a future event

    commit 22abfadc57adcf11229336eede6459585a293da6
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sat Feb 17 09:16:47 2024 -0500

        ...

    commit c7b899c4a62d667d96e8320b6fa96106972f5859
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sat Feb 17 09:10:22 2024 -0500

        ...

    commit a185fd86766c283fd9c18a4d95546a8e36fd21c9
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sat Feb 17 09:08:31 2024 -0500

        ...

    commit 87401bddac38ebb658f2e9e4fbe36af2e6010e42
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sat Feb 17 09:06:48 2024 -0500

        ...

    commit bb72bd0e2d78f2d21168144a976e772473efeb16
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sat Feb 17 08:59:44 2024 -0500

        ...

    commit 4dbeb0edfa55cf39f4c897b3c15345e2daf9dad6
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sat Feb 17 08:59:10 2024 -0500

        ...

    commit 994e13d538d580ea9f7499310b8a0e4cd841af07
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sat Feb 17 08:57:22 2024 -0500

        ...

    commit 1fee9331e72e4d48eccfd640183563a909181af6
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sat Feb 17 08:53:06 2024 -0500

        ...

    commit e74eea41048a5ec6f4a9c52025c2e452f640a808
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sat Feb 17 00:57:11 2024 -0500

        ...

    commit ec1a581ba23b292ab840b740dabd1f3e4854fe33
    Author: Mike McLean <mikem@redhat.com>
    Date:   Sat Feb 17 00:48:48 2024 -0500

        attempting to wire this up into newRepo and waitrepo task

    commit 7eee457230a2b0e6aa9b974e94e4ca516227a196
    Author: Mike McLean <mikem@redhat.com>
    Date:   Fri Feb 16 18:58:18 2024 -0500

        ...

    commit 1c719d642da5f5c2ca0b7ce9af170054767423c6
    Author: Mike McLean <mikem@redhat.com>
    Date:   Fri Feb 16 18:56:11 2024 -0500

        adjust checkRepoRequest return

    commit e6e5f15961c7801b1777743b799fbe2c96a08138
    Author: Mike McLean <mikem@redhat.com>
    Date:   Fri Feb 16 18:00:27 2024 -0500

        handle repo requests in scheduler loop

    commit a0dde4e3625110671bcea7abbdab0f0c03142cbc
    Author: Mike McLean <mikem@redhat.com>
    Date:   Fri Feb 16 11:06:00 2024 -0500

        tweak repo report in taginfo cli

    commit 2d860a17caf770507c67a89ac234d17c200c30ab
    Author: Mike McLean <mikem@redhat.com>
    Date:   Fri Feb 16 10:46:13 2024 -0500

        enable/clarify new repo fields

    commit 7204ce3753450981300bf78102fc40f1b41786b4
    Author: Mike McLean <mikem@redhat.com>
    Date:   Fri Feb 16 09:38:59 2024 -0500

        syntax

    commit 96236f4ef93e5babeb0800b5b4a16117a3e8c1df
    Author: Mike McLean <mikem@redhat.com>
    Date:   Fri Feb 16 10:20:34 2024 -0500

        pull tag_last_change_event and repo fields from multirepo branch

    commit a707c19eda9bc6efc22ce004367cbee960fcccb6
    Author: Mike McLean <mikem@redhat.com>
    Date:   Fri Feb 16 09:26:07 2024 -0500

        partial: check_repo_queue

    commit a208d128e60bdb4ad531938d55b2c793b65ab24b
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Feb 15 19:35:03 2024 -0500

        ...

    commit e9a601059fb9ceb89ec9b84680afd6dc276424f9
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Feb 15 19:22:55 2024 -0500

        ...

    commit 067e385861766d7a355d5671a1e1e73ebd737b97
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Feb 15 19:14:11 2024 -0500

        use RepoView more

    commit e5b4a58b65c6f195f724fb135acea6dd18abc3c2
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Feb 15 17:37:47 2024 -0500

        executeOne

    commit 45aecfeb0a32c097fc65574296958573e6405009
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Feb 15 17:29:06 2024 -0500

        ...

    commit 41314dc10c3a1a13f39628de5caedc7486193c7b
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Feb 15 17:27:40 2024 -0500

        only return one req

    commit c44ed9e4e3bc349e4107df79847049503a2c75be
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Feb 15 14:57:11 2024 -0500

        ...

    commit cfd60878ada8196616fd401fb6cbaf7aa2dcc98b
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Feb 15 11:10:31 2024 -0500

        ...

    commit 11f65335ca9c6167b8f457460a58471c37ae4098
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Feb 15 09:12:34 2024 -0500

        testing

    commit c05f8f3b3f64c3aeef5ff0296dc181123c756952
    Author: Mike McLean <mikem@redhat.com>
    Date:   Wed Feb 14 22:52:14 2024 -0500

        flesh out stub

    commit fd9c57c2c95bb5a1bd051d9d1e7e73e2f3fcb9b0
    Author: Mike McLean <mikem@redhat.com>
    Date:   Wed Feb 14 22:26:19 2024 -0500

        ...

    commit d59f38a5adc90607556a1671c85b808209389edd
    Author: Mike McLean <mikem@redhat.com>
    Date:   Tue Feb 6 22:19:36 2024 -0500

        more fragments

    commit 2d1b45c66e1cc3f41f6812b7b6d4bd66c4acf419
    Author: Mike McLean <mikem@redhat.com>
    Date:   Tue Feb 6 20:38:04 2024 -0500

        XXX DEBUG CODE

    commit d8e3a4bd205acb5ec1940fa30e29701f0a358d51
    Author: Mike McLean <mikem@redhat.com>
    Date:   Tue Feb 6 20:37:52 2024 -0500

        ...

    commit 0744a29bd303bf9b381aa48e3e5dd98e8b7373ef
    Author: Mike McLean <mikem@redhat.com>
    Date:   Tue Feb 6 20:37:40 2024 -0500

        ...

    commit 0726f8d22b227e002f7ddd927829a1e3ec66681f
    Author: Mike McLean <mikem@redhat.com>
    Date:   Tue Feb 6 20:27:22 2024 -0500

        RepoWatcher stub

    commit a74a74ef9688b1d27b528dd8e2de8ff3b63f97ae
    Author: Mike McLean <mikem@redhat.com>
    Date:   Tue Feb 6 00:05:49 2024 -0500

        ...

    commit d68c2902015a4998f59355aa224924e5ace21b0a
    Author: Mike McLean <mikem@redhat.com>
    Date:   Mon Feb 5 08:18:56 2024 -0500

        ...

    commit ff8538344e1bf24d7b94ad45f26fb1548be4782d
    Author: Mike McLean <mikem@redhat.com>
    Date:   Fri Feb 2 00:00:41 2024 -0500

        partial

    commit f618ed321108e0094ab95e054cb5d53fb2e0dfe1
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Feb 1 23:54:57 2024 -0500

        tweak unit test

    commit 208a2f441401cefd65a7a92d91b6b76bf5dd97d3
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Feb 1 22:52:37 2024 -0500

        comments

    commit 8fe5b4f0d773f190c037ab95520623a3d250c069
    Author: Mike McLean <mikem@redhat.com>
    Date:   Thu Feb 1 01:43:28 2024 -0500

        repo_queue stub

#!/usr/bin/python2
# Koji build daemon
# Copyright (c) 2005-2014 Red Hat, Inc.
#
# Koji is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation;
# version 2.1 of the License.
#
# This software is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this software; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
#
# Authors:
# Mike McLean <mikem@redhat.com>
# Mike Bonnet <mikeb@redhat.com>
from __future__ import absolute_import, division
import copy
import errno
import filecmp
import glob
import grp
import io
import json
import logging
import logging.handlers
import os
import pwd
import random
import re
import shutil
import signal
import smtplib
import socket
import subprocess
import sys
import time
import traceback
import xml.dom.minidom
import zipfile
from fnmatch import fnmatch
from gzip import GzipFile
from optparse import SUPPRESS_HELP, OptionParser
try:
    # Due to https://bugzilla.redhat.com/show_bug.cgi?id=1923971
    # and https://pagure.io/koji/issue/2964
    # guestfs needs to be imported before dnf, so json libraries will
    # load in non-breaking order (guestfs would be otherwise imported
    # by ImageFactory/Oz)
    import guestfs  # noqa: F401
except ImportError:
    pass
import dnf
import Cheetah.Template
import librepo
import requests
import rpm
import six
import six.moves.xmlrpc_client
from multilib import multilib
import koji
import koji.arch
import koji.plugin
import koji.rpmdiff
import koji.tasks
import koji.util
from koji.daemon import SCM, TaskManager, incremental_upload, log_output
from koji.tasks import (
    BaseTaskHandler,
    MultiPlatformTask,
    ServerExit,
    ServerRestart,
    RefuseTask,
)
from koji.util import (
    dslice,
    dslice_ex,
    format_shell_cmd,
    isSuccess,
    joinpath,
    parseStatus,
    to_list,
)
try:
    import requests_gssapi as reqgssapi
    Krb5Error = reqgssapi.exceptions.RequestException
except ImportError:  # pragma: no cover
    try:
        import requests_kerberos as reqgssapi
        Krb5Error = reqgssapi.exceptions.RequestException
    except ImportError:  # pragma: no cover
        reqgssapi = None
# imports for LiveCD, LiveMedia, and Appliance handler
try:
    import pykickstart.parser as ksparser
    import pykickstart.handlers.control as kscontrol
    import pykickstart.errors as kserrors
    import iso9660  # from pycdio
    image_enabled = True
except ImportError:  # pragma: no cover
    image_enabled = False
try:
    from imgfac.BuildDispatcher import BuildDispatcher
    from imgfac.Builder import Builder
    from imgfac.PluginManager import PluginManager
    from imgfac.ReservationManager import ReservationManager
    plugin_mgr = PluginManager('/etc/imagefactory/plugins.d')
    plugin_mgr.load()
    from imgfac.ApplicationConfiguration import ApplicationConfiguration
    from imgfac.PersistentImageManager import PersistentImageManager
    from imgfac.BaseImage import BaseImage
    from imgfac.TargetImage import TargetImage
    # NOTE: import below requires Factory 1.1.7 or higher
    from imgfac.FactoryUtils import qemu_convert_cmd
    ozif_enabled = True
except ImportError:  # pragma: no cover
    ozif_enabled = False


def main(options, session):
    logger = logging.getLogger("koji.build")
    logger.info('Starting up')
    koji.util.setup_rlimits(options.__dict__, logger)
    tm = TaskManager(options, session)
    tm.findHandlers(globals())
    tm.findHandlers(vars(koji.tasks))
    if options.plugin:
        # load plugins
        pt = koji.plugin.PluginTracker(path=options.pluginpath.split(':'))
        for name in options.plugin:
            logger.info('Loading plugin: %s' % name)
            tm.scanPlugin(pt.load(name))

    def shutdown(*args):
        raise SystemExit

    def restart(*args):
        logger.warning("Initiating graceful restart")
        tm.restart_pending = True
    signal.signal(signal.SIGTERM, shutdown)
    signal.signal(signal.SIGUSR1, restart)
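    # Main task loop: poll the hub for work on every pass; buildroot/task
    # state refresh and the sleep happen only when the previous pass did not
    # take a task, so a busy builder retries immediately.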
    exit_code = 0
    taken = False
    while True:
        try:
            if not taken:
                tm.updateBuildroots()
                tm.updateTasks()
            taken = tm.getNextTask()
        except (SystemExit, ServerExit, KeyboardInterrupt):
            logger.warning("Exiting")
            break
        except ServerRestart:
            logger.warning("Restarting")
            os.execv(sys.argv[0], sys.argv)
        except koji.AuthExpired:
            logger.error('Session expired')
            exit_code = 1
            break
        except koji.RetryError:
            raise
        except koji.AuthError:
            logger.error('Authentication error')
            exit_code = 1
            break
        except Exception:
            # XXX - this is a little extreme
            # log the exception and continue
            logger.error(''.join(traceback.format_exception(*sys.exc_info())))
            taken = False
        try:
            if not taken:
                # Only sleep if we didn't take a task, otherwise retry immediately.
                # The load-balancing code in getNextTask() will prevent a single builder
                # from getting overloaded.
                logger.debug('Sleeping for %s', options.sleeptime)
                time.sleep(options.sleeptime)
        except (SystemExit, KeyboardInterrupt):
            logger.warning("Exiting")
            break
    logger.warning("Shutting down, please wait...")
    tm.shutdown()
    session.logout()
    sys.exit(exit_code)


class BuildRoot(object):

    def __init__(self, session, options, *args, **kwargs):
        self.logger = logging.getLogger("koji.build.buildroot")
        self.session = session
        self.options = options
        self.logs = set()
        if len(args) + len(kwargs) == 1:
            # manage an existing mock buildroot
            self._load(*args, **kwargs)
        else:
            self._new(*args, **kwargs)

    def _load(self, data):
        # manage an existing buildroot
        if isinstance(data, dict):
            # assume data already pulled from db
            self.id = data['id']
        else:
            self.id = data
            data = self.session.getBuildroot(self.id)
        self.task_id = data['task_id']
        self.tag_id = data['tag_id']
        self.tag_name = data['tag_name']
        self.repoid = data['repo_id']
        self.repo_info = self.session.repoInfo(self.repoid, strict=True)
        self.event_id = self.repo_info['create_event']
        self.br_arch = data['arch']
        self.name = "%(tag_name)s-%(id)s-%(repoid)s" % vars(self)
        self.config = self.session.getBuildConfig(self.tag_id, event=self.event_id)

    def _new(self, tag, arch, task_id, repo_id=None, install_group='build',
             setup_dns=False, bind_opts=None, maven_opts=None, maven_envs=None,
             deps=None, internal_dev_setup=None):
        """Create a brand new buildroot"""
        if not repo_id:
            raise koji.BuildrootError("A repo id must be provided")
        repo_info = self.session.repoInfo(repo_id, strict=True)
        self.repo_info = repo_info
        self.repoid = self.repo_info['id']
        self.event_id = self.repo_info['create_event']
        self.task_id = task_id
        self.config = self.session.getBuildConfig(tag, event=self.event_id)
        if not self.config:
            raise koji.BuildrootError("Could not get config info for tag: %s" % tag)
        self.tag_id = self.config['id']
        self.tag_name = self.config['name']
        if self.config['id'] != repo_info['tag_id']:
            raise koji.BuildrootError("tag/repo mismatch: %s vs %s"
                                      % (self.config['name'], repo_info['tag_name']))
        repo_state = koji.REPO_STATES[repo_info['state']]
        if repo_state == 'EXPIRED':
            # This should be ok. Expired repos are still intact, just not
            # up-to-date (which may be the point in some cases).
            self.logger.info("Requested repo (%i) is no longer current" % repo_id)
        elif repo_state != 'READY':
            raise koji.BuildrootError("Requested repo (%i) is %s" % (repo_id, repo_state))
        self.br_arch = koji.canonArch(arch)
        # armhfp is not a valid arch according to autoconf
        if arch == 'armhfp':
            self.target_arch = 'armv7hl'
        else:
            self.target_arch = arch
        self.logger.debug("New buildroot: %(tag_name)s/%(br_arch)s/%(repoid)s" % vars(self))
        id = self.session.host.newBuildRoot(self.repoid, self.br_arch, task_id=task_id)
        if id is None:
            raise koji.BuildrootError("failed to get a buildroot id")
        self.id = id
        self.name = "%(tag_name)s-%(id)s-%(repoid)s" % vars(self)
        self.install_group = install_group
        self.setup_dns = setup_dns
        self.bind_opts = bind_opts
        self.maven_opts = maven_opts
        self.maven_envs = maven_envs
        self.deps = deps
        self.internal_dev_setup = internal_dev_setup
        self._writeMockConfig()

    def _writeMockConfig(self):
        # mock config
        configdir = '/etc/mock/koji'
        configfile = "%s/%s.cfg" % (configdir, self.name)
        self.mockcfg = "koji/%s" % self.name
        opts = {}
        for k in ('repoid', 'tag_name'):
            if hasattr(self, k):
                opts[k] = getattr(self, k)
        for k in ('mockdir', 'topdir', 'topurl', 'topurls', 'packager', 'vendor',
                  'distribution', 'mockhost', 'yum_proxy', 'rpmbuild_timeout'):
            if hasattr(self.options, k):
                opts[k] = getattr(self.options, k)
        opts['buildroot_id'] = self.id
        if self.setup_dns:
            opts['rpmbuild_networking'] = True
            opts['use_host_resolv'] = self.setup_dns
        opts['install_group'] = self.install_group
        opts['maven_opts'] = self.maven_opts
        opts['maven_envs'] = self.maven_envs
        opts['bind_opts'] = self.bind_opts
        opts['target_arch'] = self.target_arch
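        # Most of the remaining options are driven by the tag's "extra"
        # settings (mock.* and rpm.* keys), so mock behavior can be tuned
        # per tag rather than per builder.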
        if 'mock.forcearch' in self.config['extra']:
            if bool(self.config['extra']['mock.forcearch']):
                opts['forcearch'] = self.target_arch
        if 'mock.package_manager' in self.config['extra']:
            opts['package_manager'] = self.config['extra']['mock.package_manager']
        if 'mock.yum.module_hotfixes' in self.config['extra']:
            opts['module_hotfixes'] = self.config['extra']['mock.yum.module_hotfixes']
        if 'mock.yum.best' in self.config['extra']:
            opts['yum_best'] = int(self.config['extra']['mock.yum.best'])
        # Append opts['plugin_conf'] to enable Mock package signing
        if 'mock.plugin_conf.sign_enable' in self.config['extra']:
            # check rest of configuration
            if ('mock.plugin_conf.sign_opts.cmd' not in self.config['extra'] or
                    'mock.plugin_conf.sign_opts.opts' not in self.config['extra']):
                raise koji.GenericError("Tag is not configured properly for mock's sign plugin")
            opts['plugin_conf'] = {
                'sign_enable': self.config['extra']['mock.plugin_conf.sign_enable'],
                'sign_opts': {
                    'cmd': self.config['extra']['mock.plugin_conf.sign_opts.cmd'],
                    'opts': self.config['extra']['mock.plugin_conf.sign_opts.opts'],
                }
            }
        if 'mock.module_setup_commands' in self.config['extra']:
            opts['module_setup_commands'] = self.config['extra']['mock.module_setup_commands']
        if 'mock.releasever' in self.config['extra']:
            opts['releasever'] = self.config['extra']['mock.releasever']
        if self.internal_dev_setup is not None:
            opts['internal_dev_setup'] = bool(self.internal_dev_setup)
        opts['tag_macros'] = {}
        opts['tag_envvars'] = {}
        for key in self.config['extra']:
            if key.startswith('rpm.macro.'):
                macro = '%' + key[10:]
                opts['tag_macros'][macro] = self.config['extra'][key]
            elif key.startswith('rpm.env.'):
                opts['tag_envvars'][key[8:]] = self.config['extra'][key]
        if 'mock.use_bootstrap' in self.config['extra']:
            opts['use_bootstrap'] = bool(self.config['extra']['mock.use_bootstrap'])
        # it must be allowed in kojid.conf *and* in tag's extra info
        if (not self.options.mock_bootstrap_image and
                self.config['extra'].get('mock.bootstrap_image')):
            self.logger.warning("Mock bootstrap image requested by buildroot %d, "
                                "but forbidden on builder" % self.id)
        opts['bootstrap_image'] = self.options.mock_bootstrap_image and \
            self.config['extra'].get('mock.bootstrap_image')
        output = koji.genMockConfig(self.name, self.br_arch, managed=True, **opts)
        # write config
        with koji._open_text_file(configfile, 'wt') as fo:
            fo.write(output)
        self.single_log(configfile, name='mock_config.log')

    def get_repo_dir(self):
        pathinfo = koji.PathInfo(topdir='')
        return pathinfo.repo(self.repoid, self.tag_name)

    def _repositoryEntries(self, pi, plugin=False):
        entries = []
        if plugin:
            tag_name = 'pluginRepository'
        else:
            tag_name = 'repository'
        id_suffix = 'repo'
        name_prefix = 'Repository for Koji'
        for dep in self.deps:
            if isinstance(dep, six.integer_types):
                # dep is a task ID, the url points to the task output directory
                repo_type = 'task'
                dep_url = pi.task(dep)
                snapshots = 'true'
            else:
                # dep is a build NVR, the url points to the build output directory
                repo_type = 'build'
                build = koji.parse_NVR(dep)
                dep_url = pi.mavenbuild(build)
                snapshots = 'false'
            repo_id = 'koji-%(repo_type)s-%(dep)s-%(id_suffix)s' % locals()
            entry = """
    <%(tag_name)s>
      <id>%(repo_id)s</id>
      <name>%(name_prefix)s %(repo_type)s %(dep)s</name>
      <url>%(dep_url)s</url>
      <layout>default</layout>
      <releases>
        <enabled>true</enabled>
        <updatePolicy>never</updatePolicy>
        <checksumPolicy>fail</checksumPolicy>
      </releases>
      <snapshots>
        <enabled>%(snapshots)s</enabled>
        <updatePolicy>never</updatePolicy>
        <checksumPolicy>fail</checksumPolicy>
      </snapshots>
    </%(tag_name)s>""" % locals()
            entries.append((repo_id, entry))
        return entries

    def writeMavenSettings(self, destfile, outputdir):
        """
        Write the Maven settings.xml file to the specified destination.
        """
        task_id = self.task_id
        repo_id = self.repoid
        tag_name = self.tag_name
        deploy_dir = outputdir[len(self.rootdir()):]
        pi = koji.PathInfo(topdir=self.options.topurl)
        repourl = pi.repo(repo_id, tag_name) + '/maven'
        mirror_spec = '*'
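        # Everything is mirrored through the Koji-managed repo by default;
        # each per-dependency repo added below is then excluded from the
        # mirror by appending ',!<repo-id>' to mirror_spec.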
settings = """<settings xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/settings-1.0.0.xsd">
<interactiveMode>false</interactiveMode>
<mirrors>
<mirror>
<id>koji-maven-repo-%(tag_name)s-%(repo_id)i</id>
<name>Koji-managed Maven repository (%(tag_name)s-%(repo_id)i)</name>
<url>%(repourl)s</url>
<mirrorOf>%(mirror_spec)s</mirrorOf>
</mirror>
</mirrors>
<profiles>
<profile>
<id>koji-task-%(task_id)s</id>
<properties>
<altDeploymentRepository>koji-output::default::file://%(deploy_dir)s</altDeploymentRepository>
</properties>"""
if self.deps:
settings += """
<repositories>"""
for dep_repo_id, dep_repo_entry in self._repositoryEntries(pi):
mirror_spec += ',!' + dep_repo_id
settings += dep_repo_entry
settings += """
</repositories>
<pluginRepositories>"""
for dep_repo_id, dep_repo_entry in self._repositoryEntries(pi, plugin=True):
mirror_spec += ',!' + dep_repo_id
settings += dep_repo_entry
settings += """
</pluginRepositories>"""
settings += """
</profile>
</profiles>
<activeProfiles>
<activeProfile>koji-task-%(task_id)s</activeProfile>
</activeProfiles>
</settings>
"""
settings = settings % locals()
with koji._open_text_file(self.rootdir() + destfile, 'wt') as fo:
fo.write(settings)

    def mock(self, args):
        """Run mock"""
        mockpath = getattr(self.options, "mockpath", "/usr/bin/mock")
        cmd = [mockpath, "-r", self.mockcfg]
        # if self.options.debug_mock:
        #     cmd.append('--debug')
        # TODO: should we pass something like --verbose --trace instead?
        if 'mock.new_chroot' in self.config['extra']:
            if self.config['extra']['mock.new_chroot']:
                cmd.append('--new-chroot')
            else:
                cmd.append('--old-chroot')
        cmd.extend(args)
        self.logger.info(format_shell_cmd(cmd))
        workdir = getattr(self, 'workdir', None)
        mocklog = 'mock_output.log'
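        # Fork: the parent polls the mock process and incrementally syncs its
        # logs to the hub; the child drops privileges (when running as root)
        # and execs mock, so it never returns here.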
        pid = os.fork()
        if pid:
            log_patterns = [
                '%s/*.log' % self.resultdir(),
                '%s/var/log/dnf*.log' % self.rootdir(),
            ]
            if workdir:
                log_patterns.append('%s/%s' % (workdir, mocklog))
            logs = BuildRootLogs(self, log_patterns, with_ts=self.options.log_timestamps)
            finished = False
            while not finished:
                time.sleep(1)
                status = os.waitpid(pid, os.WNOHANG)
                if status[0] != 0:
                    finished = True
                logs.sync_logs()
            # clean up and return exit status of command
            logs.close_logs()
            return status[1]
        else:
            # in no case should exceptions propagate past here
            try:
                self.session._forget()
                if workdir:
                    outfile = os.path.join(workdir, mocklog)
                    flags = os.O_CREAT | os.O_WRONLY | os.O_APPEND
                    fd = os.open(outfile, flags, 0o666)
                    os.dup2(fd, 1)
                    os.dup2(fd, 2)
                if os.getuid() == 0 and hasattr(self.options, "mockuser"):
                    self.logger.info('Running mock as %s' % self.options.mockuser)
                    uid, gid = pwd.getpwnam(self.options.mockuser)[2:4]
                    os.setgroups([grp.getgrnam('mock')[2]])
                    os.setregid(gid, gid)
                    os.setreuid(uid, uid)
                os.execvp(cmd[0], cmd)
            except BaseException:
                # diediedie
                print("Failed to exec mock")
                print(''.join(traceback.format_exception(*sys.exc_info())))
                os._exit(1)

    def getUploadPath(self):
        """Get the path that should be used when uploading files to
        the hub."""
        return koji.pathinfo.taskrelpath(self.task_id)

    def incremental_log(self, fname, fd):
        ret = incremental_upload(self.session, fname, fd, self.getUploadPath(),
                                 logger=self.logger)
        self.logs.add(fname)
        return ret

    def single_log(self, localfile, name=None):
        if name is None:
            name = os.path.basename(localfile)
        self.session.uploadWrapper(localfile, self.getUploadPath(), name=name)
        self.logs.add(name)

    def init(self):
        rv = self.mock(['--init'])
        if rv:
            self.expire()
            raise koji.BuildrootError("could not init mock buildroot, %s" % self._mockResult(rv))
        # log kernel version
        self.mock(['--chroot', 'uname -r'])
        self.session.host.setBuildRootList(self.id, self.getPackageList())

    def _mockResult(self, rv, logfile=None):
        if logfile:
            pass
        elif os.WIFEXITED(rv) and os.WEXITSTATUS(rv) == 10:
            logfile = 'build.log'
        elif os.WIFEXITED(rv) and os.WEXITSTATUS(rv) == 1:
            logfile = 'build.log or root.log'
        else:
            logfile = 'root.log'
        msg = '; see %s for more information' % logfile
        return parseStatus(rv, 'mock') + msg

    def rebuild_srpm(self, srpm):
        self.session.host.setBuildRootState(self.id, 'BUILDING')
        # unpack SRPM to tempdir
        srpm_dir = os.path.join(self.tmpdir(), 'srpm_unpacked')
        koji.ensuredir(srpm_dir)
        top_dir = self.path_without_to_within(srpm_dir)
        args = ['--no-clean', '--target', 'noarch', '--chroot', '--',
                'rpm', '--define', '_topdir %s' % top_dir, '-iv', srpm]
        rv = self.mock(args)
        # find specfile
        spec_files = glob.glob("%s/SPECS/*.spec" % srpm_dir)
        if len(spec_files) == 0:
            raise koji.BuildError("No spec file found")
        elif len(spec_files) > 1:
            raise koji.BuildError("Multiple spec files found: %s" % spec_files)
        spec_file = os.path.join(top_dir, "SPECS", os.path.basename(spec_files[0]))
        # rebuild SRPM from spec + sources
        args = ['--no-clean', '--target', 'noarch', '--chroot', '--',
                'rpmbuild', '--define', '_topdir %s' % top_dir, '-bs', '--nodeps', spec_file]
        rv = self.mock(args)
        result_dir = os.path.join(srpm_dir, 'SRPMS')
        for fn in glob.glob('%s/*.src.rpm' % result_dir):
            shutil.move(os.path.join(result_dir, fn), self.resultdir())
        if rv:
            self.expire()
            raise koji.BuildError("error building srpm, %s" % self._mockResult(rv))

    def build_srpm(self, specfile, sourcedir, source_cmd):
        self.session.host.setBuildRootState(self.id, 'BUILDING')
        if source_cmd:
            # call the command defined by source_cmd in the chroot so any required files not
            # stored in the SCM can be retrieved
            chroot_sourcedir = sourcedir[len(self.rootdir()):]
            args = ['--no-clean', '--unpriv', '--cwd', chroot_sourcedir, '--chroot']
            args.extend(source_cmd)
            rv = self.mock(args)
            if rv:
                self.expire()
                raise koji.BuildError("error retrieving sources, %s" % self._mockResult(rv))
        alt_sources_dir = "%s/SOURCES" % sourcedir
        if self.options.support_rpm_source_layout and os.path.isdir(alt_sources_dir):
            sources_dir = alt_sources_dir
        else:
            sources_dir = sourcedir
        args = ['--no-clean', '--buildsrpm', '--spec', specfile, '--sources', sources_dir,
                '--target', 'noarch']
        rv = self.mock(args)
        if rv:
            self.expire()
            raise koji.BuildError("error building srpm, %s" % self._mockResult(rv))
def build(self, srpm, arch=None):
# run build
self.session.host.setBuildRootState(self.id, 'BUILDING')
args = ['--no-clean']
if arch:
args.extend(['--target', arch])
args.extend(['--rebuild', srpm])
rv = self.mock(args)
self.session.host.updateBuildRootList(self.id, self.getPackageList())
if rv:
self.expire()
raise koji.BuildError("error building package (arch %s), %s" %
(arch, self._mockResult(rv)))
def getPackageList(self):
"""Return a list of packages from the buildroot
Each member of the list is a dictionary containing the following fields:
- id, optional; set for internal rpms that can be mapped via rpmlist.jsonl
- name
- version
- release
- epoch
- arch
- payloadhash
- size
- buildtime
- external_repo, optional for external rpm
- location, optional for external rpm
"""
fields = ('name',
'version',
'release',
'epoch',
'arch',
'sigmd5',
'size',
'buildtime')
# Determine db path
dbpath = "%s/usr/lib/sysimage/rpm" % self.rootdir()
if not os.path.exists(dbpath):
# older variant (<rhel10, <fedora39)
dbpath = "%s/var/lib/rpm" % self.rootdir()
else:
# if the new dbpath is used and mock is running an older rpm on the builder,
# __db* symlinks could exist in the migrated directory
for f in glob.glob('%s/__db*' % dbpath):
if not os.path.isfile(f):
os.unlink(f)
if not os.path.exists(dbpath):
raise koji.GenericError("Can't get list of installed rpms")
rpm.addMacro("_dbpath", dbpath)
ret = []
try:
ts = rpm.TransactionSet()
for h in ts.dbMatch():
pkg = koji.get_header_fields(h, fields)
# skip our fake packages
if pkg['name'] in ['buildsys-build', 'gpg-pubkey']:
# XXX config
continue
pkg['payloadhash'] = koji.hex_string(pkg['sigmd5'])
del pkg['sigmd5']
ret.append(pkg)
finally:
rpm.delMacro("_dbpath")
self.markExternalRPMs(ret)
self.mapInternalRPMs(ret)
return ret
def getMavenPackageList(self, repodir):
"""Return a list of Maven packages that were installed into the local repo
to satisfy build requirements.
Each member of the list is a dictionary containing the following fields:
- maven_info: a dict of Maven info containing the group_id, artifact_id, and version fields
- files: a list of files associated with that POM
"""
packages = []
for path, dirs, files in os.walk(repodir):
relpath = path[len(repodir) + 1:]
maven_files = []
for repofile in files:
if koji.util.multi_fnmatch(repofile, self.options.maven_repo_ignore) or \
koji.util.multi_fnmatch(os.path.join(relpath, repofile),
self.options.maven_repo_ignore):
continue
if relpath == '' and repofile in ['scm-sources.zip', 'patches.zip']:
# special-case the archives of the sources and patches, since we drop them
# in the root of the output directory
continue
maven_files.append({'path': relpath, 'filename': repofile,
'size': os.path.getsize(os.path.join(path, repofile))})
if maven_files:
path_comps = relpath.split('/')
if len(path_comps) < 3:
raise koji.BuildrootError('files found in unexpected path in local Maven repo,'
' directory: %s, files: %s' %
(relpath,
', '.join([f['filename'] for f in maven_files])))
# extract the Maven info from the path within the local repo
maven_info = {'version': path_comps[-1],
'artifact_id': path_comps[-2],
'group_id': '.'.join(path_comps[:-2])}
packages.append({'maven_info': maven_info, 'files': maven_files})
return packages
def mavenBuild(self, sourcedir, outputdir, repodir,
props=None, profiles=None, options=None, goals=None):
self.session.host.setBuildRootState(self.id, 'BUILDING')
cmd = ['--no-clean', '--chroot', '--unpriv', '--cwd', sourcedir[len(self.rootdir()):],
'--', '/usr/bin/mvn', '-C']
if options:
cmd.extend(options)
if profiles:
cmd.append('-P%s' % ','.join(profiles))
if props:
for name, value in props.items():
if value is not None:
cmd.append('-D%s=%s' % (name, value))
else:
cmd.append('-D%s' % name)
if goals:
cmd.extend(goals)
cmd.extend(['deploy'])
rv = self.mock(cmd)
# if the deploy command failed, don't raise an error on unknown artifacts, because that
# will mask the underlying failure
ignore_unknown = False
if rv:
ignore_unknown = True
self.session.host.updateMavenBuildRootList(self.id, self.task_id,
self.getMavenPackageList(repodir),
ignore=self.getMavenPackageList(outputdir),
project=True, ignore_unknown=ignore_unknown,
extra_deps=self.deps)
if rv:
self.expire()
raise koji.BuildrootError('error building Maven package, %s' %
self._mockResult(rv, logfile='root.log'))
def markExternalRPMs(self, rpmlist):
"""Check rpms against pkgorigins and add external repo data to the external ones
Modifies rpmlist in place. No return
"""
external_repos = self.session.getExternalRepoList(self.repo_info['tag_id'],
event=self.repo_info['create_event'])
if not external_repos:
# nothing to do
return
# index external repos by expanded url
erepo_idx = {}
for erepo in external_repos:
# substitute $arch in the url with the arch of the repo we're generating
ext_url = erepo['url'].replace('$arch', self.br_arch)
erepo_idx[ext_url] = erepo
opts = dict([(k, getattr(self.options, k)) for k in ('topurl', 'topdir')])
opts['tempdir'] = self.options.workdir
repo_url = os.path.join(self.get_repo_dir(), self.br_arch)
# repo_url can start with '/', so don't use os.path.join with topurl/topdir
if self.options.topurl:
repo_url = '%s/%s' % (self.options.topurl, repo_url)
elif self.options.topdir:
repo_url = '%s/%s' % (self.options.topdir, repo_url)
self.logger.info("repo url of buildroot: %s is %s", self.name, repo_url)
tmpdir = os.path.join(self.tmpdir(), 'librepo-markExternalRPMs')
koji.ensuredir(tmpdir)
h = librepo.Handle()
r = librepo.Result()
h.setopt(librepo.LRO_REPOTYPE, librepo.LR_YUMREPO)
h.setopt(librepo.LRO_URLS, [repo_url])
h.setopt(librepo.LRO_DESTDIR, tmpdir)
# We are using this just to find out the location of 'origin';
# we don't even need to download it since we use openRemoteFile
h.setopt(librepo.LRO_YUMDLIST, [])
h.perform(r)
pkgorigins = r.getinfo(librepo.LRR_YUM_REPOMD)['origin']['location_href']
koji.util.rmtree(tmpdir)
relpath = os.path.join(self.get_repo_dir(), self.br_arch, pkgorigins)
with koji.openRemoteFile(relpath, **opts) as fo:
# at this point we know there were external repos at the create event,
# so there should be an origins file.
origin_idx = {}
with GzipFile(fileobj=fo, mode='r') as fo2:
if six.PY3:
fo2 = io.TextIOWrapper(fo2, encoding='utf-8')
for line in fo2:
parts = line.split(None, 2)
if len(parts) < 2:
continue
# first field is formatted by yum as [e:]n-v-r.a
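# e.g. a full line might look like (hypothetical values):
#   'bash-5.2.15-3.fc38.x86_64 https://mirror.example.com/repo/'
# mapping the nvra to the url of the repo it came from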
nvra = "%(name)s-%(version)s-%(release)s.%(arch)s" % koji.parse_NVRA(parts[0])
origin_idx[nvra] = parts[1]
# mergerepo starts from a local repo in the task workdir, so internal
# rpms have an odd-looking origin that we need to look for
localtail = '/repo_%s_premerge/' % self.repo_info['id']
for rpm_info in rpmlist:
key = "%(name)s-%(version)s-%(release)s.%(arch)s" % rpm_info
# src rpms should not show up in rpmlist so we do not have to
# worry about fixing the arch for them
ext_url = origin_idx.get(key)
if not ext_url:
raise koji.BuildError("No origin for %s" % key)
erepo = erepo_idx.get(ext_url)
if not erepo:
if ext_url.startswith('file://') and ext_url.endswith(localtail):
# internal rpm
continue
raise koji.BuildError("Unknown origin for %s: %s" % (key, ext_url))
rpm_info['external_repo'] = erepo
rpm_info['location'] = erepo['external_repo_id']
def mapInternalRPMs(self, rpmlist):
"""
Map each rpm item of rpmlist to a specific koji rpm entry based on repo contents.
The rpmlist should be a list of dicts containing rpm header values. These entries will be
modified in place to include an id field when mapped.
This mapping relies on the rpmlist.jsonl file for the repo. If this file is missing, the
code will fall back to querying the hub.
This function will raise an error if there is a sigmd5 mismatch for a given rpm.
:param list rpmlist: rpm list fetched from local RPMDB.
:return: None
"""
opts = dict([(k, getattr(self.options, k)) for k in ('topurl', 'topdir')])
rpmlist_path = os.path.join(self.get_repo_dir(), self.br_arch, 'rpmlist.jsonl')
compat_mode = False
try:
with koji.openRemoteFile(rpmlist_path, **opts) as fo:
repo_rpms = [json.loads(line) for line in fo]
except requests.exceptions.HTTPError as e:
if e.response.status_code == 404:
self.logger.warning("Missing repo content file: %s", rpmlist_path)
# TODO: remove this workaround once we can assume that repos contain this file
repo_rpms = self.repo_draft_rpms()
compat_mode = True
else:
raise
fmt = "%(name)s-%(version)s-%(release)s.%(arch)s"
repo_rpms = {fmt % r: r for r in repo_rpms}
for rpm_info in rpmlist:
if 'external_repo' in rpm_info:
continue
nvra = fmt % rpm_info
data = repo_rpms.get(nvra)
if not data:
# happens a lot in compat mode because we only query for drafts
if not compat_mode:
self.logger.warning("%s not found in rpmlist.jsonl", nvra)
continue
# check payloadhash in case they are different
elif data['payloadhash'] != rpm_info['payloadhash']:
raise koji.BuildrootError(
"RPM: %s: payloadhash: %s mismatch expected %s in rpmlist.jsonl"
% (nvra, rpm_info['payloadhash'], data['payloadhash'])
)
else:
# set rpm id
rpm_info['id'] = data['id']
def repo_draft_rpms(self):
drafts, draftbuilds = self.session.listTaggedRPMS(
tag=self.repo_info['tag_id'],
event=self.repo_info['create_event'],
latest=True,
draft=True)
return drafts
def path_without_to_within(self, path):
"""
Convert an absolute path from without the BuildRoot to one within.
For example, if the BuildRoot is located at '/tmp/my/build/root',
calling path_without_to_within('/tmp/my/build/root/foo/bar') would
return '/foo/bar'.
:param path:
An absolute path (as seen from outside the chroot) to a location
within the BuildRoot.
:return:
The equivalent absolute path from within a chroot of the BuildRoot.
"""
root = self.rootdir()
if os.path.commonprefix([root, path]) != root:
raise ValueError(
'path %r is not within the BuildRoot at %r' % (path, root)
)
return os.path.join('/', os.path.relpath(path, root))
def resultdir(self):
return "%s/%s/result" % (self.options.mockdir, self.name)
def rootdir(self):
return "%s/%s/root" % (self.options.mockdir, self.name)
def tmpdir(self, within=False):
# with mock 1.4+, /tmp is a tmpfs mounted on each run, so a different
# directory is needed for persistence
# 'within' is equivalent to broot.path_without_to_within(broot.tmpdir())
base = self.options.chroot_tmpdir
if within:
return base
else:
return "%s%s" % (self.rootdir(), base)
def expire(self):
self.session.host.setBuildRootState(self.id, 'EXPIRED')
class BuildRootLogs(object):
"Track the logs generated during a mock run"
def __init__(self, broot, patterns, with_ts=False):
self.broot = broot
self.patterns = patterns
self.with_ts = with_ts
self.loginfo = {}
self.ts_logs = {}
self.ignored = set()
self.names = {}
self.logger = broot.logger
self.workdir = getattr(broot, 'workdir', None)
if with_ts and self.workdir is None:
self.logger.error('No workdir defined -- disabling log timestamps')
self.with_ts = False
def find_logs(self):
matches = []
for pattern in self.patterns:
m = glob.glob(pattern)
for path in m:
if path not in self.loginfo:
self.logger.debug('Log matched pattern %r: %s', pattern, path)
matches.append(path)
return matches
def add_log(self, path):
if path in self.loginfo or path in self.ignored:
return
if path.endswith('-ts.log'):
self.logger.error('ignoring stray ts log: %s', path)
self.ignored.add(path)
return
# pick a unique name for upload if there is overlap
fname = os.path.basename(path)
if fname in self.names:
base, ext = os.path.splitext(fname)
for n in range(99):
fname = '%s.DUP%02i%s' % (base, n, ext)
if fname not in self.names:
self.logger.debug('Using log name alias %s for %s', fname, path)
break
else:
self.logger.error('Unable to find unique log name for %s', path)
self.ignored.add(path)
return
info = {'name': fname, 'path': path}
self.names[fname] = info
self.loginfo[path] = info
self.logger.debug('Watching buildroot log: %r', info)
if self.with_ts:
self.add_ts_log(info)
def add_ts_log(self, info):
ts_name = '%(name)s-ts.log' % info
ts_path = os.path.join(self.workdir, ts_name)
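# each ts log line pairs a wall-clock timestamp with an upload offset,
# written as '<epoch seconds> <byte offset>', e.g. '1718700000 4096'
# (values illustrative)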
offset = 0
if os.path.exists(ts_path):
# XXX should this even happen?
# read last offset from existing ts file
with koji._open_text_file(ts_path) as ts_file:
lines = ts_file.readlines()
if lines:
offset = int(lines[-1].split()[1])
else:
# initialize ts file at zero
with koji._open_text_file(ts_path, 'at') as ts_file:
ts_file.write('%.0f 0\n' % time.time())
info['offset'] = offset
info['ts_log'] = ts_path
self.ts_logs[ts_path] = {'name': ts_name, 'path': ts_path, 'ts': True}
self.logger.debug('Watching timestamp log: %r', info)
def get_logs(self):
for info in self.loginfo.values():
yield info
for info in self.ts_logs.values():
yield info
def sync_logs(self):
paths = self.find_logs()
for fpath in paths:
self.add_log(fpath)
for info in self.get_logs():
# note that the ts logs are listed last
try:
self.sync_log(info)
except OSError:
self.logger.error("Error reading mock log: %(path)s", info)
self.logger.error(''.join(traceback.format_exception(*sys.exc_info())))
continue
def sync_log(self, info):
fpath = info['path']
try:
st = os.stat(fpath)
except OSError as e:
if e.errno == errno.ENOENT:
if info.get('missing'):
# we've already noted this, don't spam the logs
return
self.logger.error('Log disappeared: %(path)s', info)
info['missing'] = True
return
raise
if info.get('missing'):
self.logger.error('Log re-appeared: %(path)s', info)
del info['missing']
fd = info.get('fd')
if fd is None:
# freshly added, we need to open it
fd = open(fpath, 'rb')
info['fd'] = fd
last_st = info.get('st')
if last_st:
if st.st_ino != last_st.st_ino or st.st_size < last_st.st_size:
# file appears to have been rewritten or truncated
self.logger.info('Rereading %s, inode: %s -> %s, size: %s -> %s',
fpath, last_st.st_ino, st.st_ino, last_st.st_size, st.st_size)
fd.close()
fd = open(fpath, 'rb')
info['fd'] = fd
info['st'] = st
self.broot.incremental_log(info['name'], fd)
ts_log = info.get('ts_log')
if ts_log and self.with_ts:
# race condition against incremental_upload's tell,
# but with enough precision for ts.log purposes
position = fd.tell()
info.setdefault('offset', 0)
if info['offset'] < position:
with koji._open_text_file(ts_log, 'at') as ts_fo:
ts_fo.write('%.0f %i\n' % (time.time(), position))
info['offset'] = position
def close_logs(self):
for info in self.get_logs():
fd = info.get('fd')
if fd:
fd.close()
class ChainBuildTask(BaseTaskHandler):
Methods = ['chainbuild']
# mostly just waiting on other tasks
_taskWeight = 0.1
def handler(self, srcs, target, opts=None):
"""Run a chain build
target and opts are passed on to the build tasks
srcs is a list of "build levels"
each build level is a list of strings, each string may be one of:
- a build src (SCM url only)
- an n-v-r
each build level is processed in order
successive levels are only started once the previous levels have completed
and gotten into the repo.
"""
if opts is None:
opts = {}
if opts.get('scratch'):
raise koji.BuildError("--scratch is not allowed with chain-builds")
target_info = self.session.getBuildTarget(target)
if not target_info:
raise koji.GenericError('unknown build target: %s' % target)
nvrs = []
for n_level, build_level in enumerate(srcs):
# if there are any nvrs to wait on, do so
if nvrs:
task_id = self.session.host.subtask(method='waitrepo',
arglist=[
target_info['build_tag_name'], None, nvrs],
label="wait %i" % n_level,
parent=self.id)
self.wait(task_id, all=True, failany=True)
nvrs = []
# kick off the builds for this level
build_tasks = []
for n_src, src in enumerate(build_level):
if SCM.is_scm_url(src):
task_id = self.session.host.subtask(method='build',
arglist=[src, target, opts],
label="build %i,%i" % (n_level, n_src),
parent=self.id)
build_tasks.append(task_id)
else:
nvrs.append(src)
# next pass will wait for these
if build_tasks:
# the level could have been all nvrs
self.wait(build_tasks, all=True, failany=True)
# see what builds we created in this batch so the next pass can wait for them also
for build_task in build_tasks:
builds = self.session.listBuilds(taskID=build_task)
if builds:
nvrs.append(builds[0]['nvr'])
class BuildTask(BaseTaskHandler):
Methods = ['build']
# we mostly just wait on other tasks
_taskWeight = 0.2
def handler(self, src, target, opts=None):
"""Handler for the master build task"""
if opts is None:
opts = {}
self.opts = opts
if opts.get('arch_override') and not opts.get('scratch'):
raise koji.BuildError("arch_override is only allowed for scratch builds")
if opts.get('repo_id') is not None:
repo_info = self.session.repoInfo(opts['repo_id'])
if not repo_info:
raise koji.BuildError('No such repo: %s' % opts['repo_id'])
repo_state = koji.REPO_STATES[repo_info['state']]
if repo_state not in ('READY', 'EXPIRED'):
raise koji.BuildError('Bad repo: %s (%s)' % (repo_info['id'], repo_state))
self.event_id = repo_info['create_event']
else:
repo_info = None
# we'll wait for a repo later (self.getRepo)
self.event_id = None
if opts.get('custom_user_metadata'):
if not isinstance(opts['custom_user_metadata'], dict):
raise koji.BuildError('custom_user_metadata must be serializable to a JSON object')
try:
json.dumps(opts['custom_user_metadata'])
except TypeError:
error_msg = 'custom_user_metadata is not JSON serializable'
self.logger.exception(error_msg)
raise koji.BuildError(error_msg)
task_info = self.session.getTaskInfo(self.id)
target_info = None
if target:
target_info = self.session.getBuildTarget(target, event=self.event_id)
if target_info:
dest_tag = target_info['dest_tag']
build_tag = target_info['build_tag']
if repo_info is not None:
# make sure specified repo matches target
if repo_info['tag_id'] != target_info['build_tag']:
raise koji.BuildError('Repo/Target mismatch: %s/%s'
% (repo_info['tag_name'], target_info['build_tag_name']))
else:
# if repo_id is specified, we can allow the 'target' arg to simply specify
# the destination tag (since the repo specifies the build tag).
if repo_info is None:
raise koji.GenericError('unknown build target: %s' % target)
build_tag = repo_info['tag_id']
if target is None:
# ok, call it skip-tag for the buildroot tag
self.opts['skip_tag'] = True
dest_tag = build_tag
else:
taginfo = self.session.getTag(target, event=self.event_id)
if not taginfo:
raise koji.GenericError('neither tag nor target: %s' % target)
dest_tag = taginfo['id']
# policy checks...
policy_data = {
'user_id': task_info['owner'],
'source': src,
'task_id': self.id,
'build_tag': build_tag, # id
'skip_tag': bool(self.opts.get('skip_tag')),
'scratch': opts.get('scratch'),
'draft': opts.get('draft'),
'from_scm': SCM.is_scm_url(src),
'repo_id': opts.get('repo_id'),
}
if target_info:
policy_data['target'] = target_info['name']
if not self.opts.get('skip_tag'):
policy_data['tag'] = dest_tag # id
# backward-compatible deprecated policies (TODO: remove if py2 is dropped - rhel6 builders)
if not SCM.is_scm_url(src) and not opts.get('scratch'):
# let hub policy decide
self.session.host.assertPolicy('build_from_srpm', policy_data)
if opts.get('repo_id') is not None:
# use of this option is governed by policy
self.session.host.assertPolicy('build_from_repo_id', policy_data)
self.session.host.assertPolicy('build_rpm', policy_data)
if not repo_info:
repo_info = self.getRepo(build_tag, builds=opts.get('wait_builds'),
wait=opts.get('wait_repo')) # (subtask)
self.event_id = self.session.getLastEvent()['id']
srpm = self.getSRPM(src, build_tag, repo_info['id'])
h = self.readSRPMHeader(srpm)
data = koji.get_header_fields(h, ['name', 'version', 'release', 'epoch'])
data['task_id'] = self.id
if getattr(self, 'source', False):
data['source'] = self.source['source']
data['extra'] = {'source': {'original_url': self.source['url']}}
if opts.get('custom_user_metadata'):
data.setdefault('extra', {})
data['extra']['custom_user_metadata'] = opts['custom_user_metadata']
extra_arches = None
self.logger.info("Reading package config for %(name)s" % data)
pkg_cfg = self.session.getPackageConfig(dest_tag, data['name'], event=self.event_id)
self.logger.debug("%r" % pkg_cfg)
if pkg_cfg is not None:
extra_arches = pkg_cfg.get('extra_arches')
if not self.opts.get('skip_tag') and not self.opts.get('scratch'):
# Make sure package is on the list for this tag
if pkg_cfg is None:
raise koji.BuildError("package %s not in list for tag %s"
% (data['name'], target_info['dest_tag_name']))
elif pkg_cfg['blocked']:
raise koji.BuildError("package %s is blocked for tag %s"
% (data['name'], target_info['dest_tag_name']))
# TODO - more pre tests
archlist = self.getArchList(build_tag, h, extra=extra_arches)
# pass draft option in
if opts.get('draft'):
data['draft'] = opts.get('draft')
# let the system know about the build we're attempting
if not self.opts.get('scratch'):
# scratch builds do not get imported
build_id = self.session.host.initBuild(data)
# (initBuild raises an exception if there is a conflict)
failany = (self.opts.get('fail_fast', False) or
not getattr(self.options, 'build_arch_can_fail', False))
try:
self.extra_information = {"src": src, "data": data, "target": target}
srpm, rpms, brmap, logs = self.runBuilds(srpm, build_tag, archlist,
repo_info['id'], failany=failany)
if opts.get('scratch'):
# scratch builds do not get imported
self.session.host.moveBuildToScratch(self.id, srpm, rpms, logs=logs)
else:
self.session.host.completeBuild(self.id, build_id, srpm, rpms, brmap, logs=logs)
except (SystemExit, ServerExit, KeyboardInterrupt):
# we do not trap these
raise
except Exception:
if not self.opts.get('scratch'):
# scratch builds do not get imported
self.session.host.failBuild(self.id, build_id)
# reraise the exception
raise
if not self.opts.get('skip_tag') and not self.opts.get('scratch'):
self.tagBuild(build_id, dest_tag)
def getSRPM(self, src, build_tag, repo_id):
"""
Get an srpm from src - either check out the SCM and build the srpm from
it, or use an already-uploaded srpm (rebuilding it if needed).
The build tag has an extra.rebuild_srpm field that defaults to True. For
scratch builds the user may override it; for regular builds only the
build tag's option can affect it.
:param str src:
SCM url or filename
:param str|dict build_tag:
build tag used for re/building srpm
:param int repo_id:
repo id to be used
"""
if isinstance(src, str):
if SCM.is_scm_url(src):
return self.getSRPMFromSCM(src, build_tag, repo_id)
else:
buildconfig = self.session.getBuildConfig(build_tag, event=self.event_id)
rebuild = buildconfig['extra'].get('rebuild_srpm', True)
if self.opts.get('scratch') and self.opts.get('rebuild_srpm') is not None:
rebuild = self.opts.get('rebuild_srpm')
if rebuild:
return self.getSRPMFromSRPM(src, build_tag, repo_id)
else:
return src
else:
raise koji.BuildError('Invalid source specification: %s' % src)
# XXX - other methods?
def getSRPMFromSRPM(self, src, build_tag, repo_id):
# rebuild srpm in mock, so it gets correct disttag, rpm version, etc.
taskarch = self.choose_taskarch('noarch', None, build_tag)
task_id = self.session.host.subtask(method='rebuildSRPM',
arglist=[src, build_tag, {
'repo_id': repo_id,
'scratch': self.opts.get('scratch')}],
label='srpm',
arch=taskarch,
parent=self.id)
# wait for subtask to finish
result = self.wait(task_id)[task_id]
if 'source' in result:
self.source = result['source']
else:
self.logger.warning('subtask did not provide source data')
srpm = result['srpm']
return srpm
def getSRPMFromSCM(self, url, build_tag, repo_id):
# TODO - allow different ways to get the srpm
taskarch = self.choose_taskarch('noarch', None, build_tag)
task_id = self.session.host.subtask(method='buildSRPMFromSCM',
arglist=[url, build_tag, {
'repo_id': repo_id,
'scratch': self.opts.get('scratch')}],
label='srpm',
arch=taskarch,
parent=self.id)
# wait for subtask to finish
result = self.wait(task_id)[task_id]
if 'source' in result:
self.source = result['source']
else:
self.logger.warning('subtask did not provide source data')
srpm = result['srpm']
return srpm
def readSRPMHeader(self, srpm):
# srpm arg should be a path relative to <BASEDIR>/work
self.logger.debug("Reading SRPM")
relpath = "work/%s" % srpm
opts = dict([(k, getattr(self.options, k)) for k in ('topurl', 'topdir')])
opts['tempdir'] = self.workdir
with koji.openRemoteFile(relpath, **opts) as fo:
h = koji.get_rpm_header(fo)
if not koji.get_header_field(h, 'sourcepackage'):
raise koji.BuildError("%s is not a source package" % srpm)
return h
def getArchList(self, build_tag, h, extra=None):
# get list of arches to build for
buildconfig = self.session.getBuildConfig(build_tag, event=self.event_id)
arches = buildconfig['arches']
if not arches:
# XXX - need to handle this better
raise koji.BuildError("No arches for tag %(name)s [%(id)s]" % buildconfig)
tag_archlist = [koji.canonArch(a) for a in arches.split()]
self.logger.debug('arches: %s' % arches)
if extra:
self.logger.debug('Got extra arches: %s' % extra)
arches = "%s %s" % (arches, extra)
archlist = arches.split()
self.logger.debug('base archlist: %r' % archlist)
# - adjust arch list based on srpm macros
buildarchs = koji.get_header_field(h, 'buildarchs')
exclusivearch = koji.get_header_field(h, 'exclusivearch')
excludearch = koji.get_header_field(h, 'excludearch')
if buildarchs:
archlist = buildarchs
self.logger.debug('archlist after buildarchs: %r' % archlist)
if exclusivearch:
archlist = [a for a in archlist if a in exclusivearch]
self.logger.debug('archlist after exclusivearch: %r' % archlist)
if excludearch:
archlist = [a for a in archlist if a not in excludearch]
self.logger.debug('archlist after excludearch: %r' % archlist)
# noarch is funny
if 'noarch' not in excludearch and \
('noarch' in buildarchs or 'noarch' in exclusivearch):
archlist.append('noarch')
override = self.opts.get('arch_override')
if self.opts.get('scratch') and override:
# only honor override for scratch builds
self.logger.debug('arch override: %s' % override)
archlist = override.split()
archdict = {}
for a in archlist:
# Filter based on canonical arches for tag
# This prevents building for an arch that we can't handle
if a == 'noarch' or koji.canonArch(a) in tag_archlist:
archdict[a] = 1
if not archdict:
raise koji.BuildError("No matching arches were found")
return to_list(archdict.keys())
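# An illustrative run (hypothetical values): tag arches 'x86_64 ppc64le'
# and an srpm with ExclusiveArch 'x86_64 noarch' narrow archlist to
# ['x86_64'], then 'noarch' is re-appended by the check above, giving
# ['x86_64', 'noarch'] after the canonical-arch filter.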
def choose_taskarch(self, arch, srpm, build_tag):
"""Adjust the arch for buildArch subtask as needed"""
if koji.util.multi_fnmatch(arch, self.options.literal_task_arches):
return arch
if arch != 'noarch':
return koji.canonArch(arch)
# For noarch, attempt to honor ExcludeArch/ExclusiveArch
# see https://pagure.io/koji/issue/19
if srpm is None:
exclusivearch = []
excludearch = []
else:
h = self.readSRPMHeader(srpm)
exclusivearch = koji.get_header_field(h, 'exclusivearch')
excludearch = koji.get_header_field(h, 'excludearch')
buildconfig = self.session.getBuildConfig(build_tag, event=self.event_id)
noarch_arches = buildconfig.get('extra', {}).get('noarch_arches')
if exclusivearch or excludearch or noarch_arches:
# if one of the tag arches is filtered out, then we can't use a
# noarch task
arches = buildconfig['arches']
tag_arches = [koji.canonArch(a) for a in arches.split()]
exclusivearch = [koji.canonArch(a) for a in exclusivearch]
excludearch = [koji.canonArch(a) for a in excludearch]
# tag.extra overrides tag arches for noarch
if noarch_arches:
archlist = [koji.canonArch(a) for a in noarch_arches.split()]
archlist = [a for a in archlist if a in tag_arches]
else:
archlist = list(tag_arches)
if exclusivearch:
archlist = [a for a in archlist if a in exclusivearch]
if excludearch:
archlist = [a for a in archlist if a not in excludearch]
self.logger.info('Filtering arches for noarch subtask. Choices: %r', archlist)
if not archlist:
raise koji.BuildError("No valid arches were found. tag %r, extra %r,"
"exclusive %r, exclude %r" % (tag_arches, noarch_arches,
exclusivearch, excludearch))
self.logger.debug('tag: %r, extra: %r, exclusive: %r, exclude: %r',
tag_arches, noarch_arches, exclusivearch, excludearch)
if set(archlist) != set(tag_arches):
return random.choice(archlist)
else:
# noarch is ok
return 'noarch'
# otherwise, noarch is ok
return 'noarch'
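# An illustrative case (hypothetical values): tag arches 'x86_64 s390x'
# with the srpm declaring ExclusiveArch 'x86_64' leaves archlist as
# ['x86_64']; since that differs from the full tag set, the noarch subtask
# is pinned to a random choice from the filtered list (here, x86_64)
# instead of running as 'noarch'.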
def runBuilds(self, srpm, build_tag, archlist, repo_id, failany=True):
self.logger.debug("Spawning jobs for arches: %r" % (archlist))
subtasks = {}
keep_srpm = True
for arch in archlist:
taskarch = self.choose_taskarch(arch, srpm, build_tag)
subtasks[arch] = self.session.host.subtask(method='buildArch',
arglist=[srpm, build_tag, arch,
keep_srpm, {'repo_id': repo_id}],
label=arch,
parent=self.id,
arch=taskarch)
keep_srpm = False
self.logger.debug("Got subtasks: %r" % (subtasks))
self.logger.debug("Waiting on subtasks...")
# wait for subtasks to finish
results = self.wait(to_list(subtasks.values()), all=True, failany=failany)
# finalize import
# merge data into needed args for completeBuild call
rpms = []
brmap = {}
logs = {}
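# shapes of the merged results: rpms is a flat list of upload paths, brmap
# maps each rpm (and the srpm) path to the buildroot id that produced it,
# and logs maps arch -> list of log paths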
built_srpm = None
for (arch, task_id) in six.iteritems(subtasks):
result = results[task_id]
self.logger.debug("DEBUG: %r : %r " % (arch, result,))
brootid = result['brootid']
for fn in result['rpms']:
rpms.append(fn)
brmap[fn] = brootid
for fn in result['logs']:
logs.setdefault(arch, []).append(fn)
if result['srpms']:
if built_srpm:
raise koji.BuildError("multiple builds returned a srpm. task %i" % self.id)
else:
built_srpm = result['srpms'][0]
brmap[result['srpms'][0]] = brootid
if built_srpm:
srpm = built_srpm
else:
raise koji.BuildError("could not find a built srpm")
return srpm, rpms, brmap, logs
def tagBuild(self, build_id, dest_tag):
# XXX - need options to skip tagging and to force tagging
# create the tagBuild subtask
# this will handle the "post tests"
task_id = self.session.host.subtask(method='tagBuild',
arglist=[dest_tag, build_id, False, None, True],
label='tag',
parent=self.id,
arch='noarch')
self.wait(task_id)
class BaseBuildTask(BaseTaskHandler):
"""Base class for tasks the create a build root"""
def checkHostArch(self, tag, hostdata, event=None):
tagref = tag
if isinstance(tag, dict):
tagref = tag.get('id') or tag.get('name')
opts = {}
if event is not None:
opts['event'] = event
tag = self.session.getBuildConfig(tagref, **opts)
if tag and tag['arches']:
tag_arches = [koji.canonArch(a) for a in tag['arches'].split()]
host_arches = hostdata['arches'].split()
if not set(tag_arches).intersection(host_arches):
self.logger.info('Task %s (%s): tag arches (%s) and '
'host arches (%s) are disjoint' %
(self.id, self.method,
', '.join(tag_arches), ', '.join(host_arches)))
return False
# otherwise...
# This is in principle an error condition, but this is not a good place
# to fail. Instead we proceed and let the task fail normally.
return True
class BuildArchTask(BaseBuildTask):
Methods = ['buildArch']
def weight(self):
return 1.5
def updateWeight(self, name):
"""
Update the weight of this task based on the package we're building.
weight is scaled from a minimum of 1.5 to a maximum of 6, based on
the average duration of a build of this package.
"""
try:
avg = self.session.getAverageBuildDuration(name, age=6)
except koji.ParameterError:
# for hub < 1.23
avg = self.session.getAverageBuildDuration(name)
if not avg:
return
if avg < 0:
self.logger.warning("Negative average build duration for %s: %s", name, avg)
return
# increase the task weight by 0.75 for every hour of build duration
adj = avg / 4800.0
# cap the adjustment at +4.5
weight = self.weight() + min(4.5, adj)
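# e.g. (illustrative) an average build of 2 hours (7200s) gives
# adj = 7200 / 4800.0 = 1.5 and a weight of 3.0; anything past 6 hours
# hits the +4.5 cap for the maximum weight of 6.0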
self.session.host.setTaskWeight(self.id, weight)
def checkHost(self, hostdata):
tag = self.params[1]
return self.checkHostArch(tag, hostdata)
def srpm_sanity_checks(self, filename):
h_fields = koji.get_header_fields(filename, ['packager', 'vendor', 'distribution'])
if not h_fields['packager']:
raise koji.BuildError("The build system failed to set the packager tag")
if not h_fields['vendor']:
raise koji.BuildError("The build system failed to set the vendor tag")
if not h_fields['distribution']:
raise koji.BuildError("The build system failed to set the distribution tag")
def handler(self, pkg, root, arch, keep_srpm, opts=None):
"""Build a package in a buildroot for one arch"""
ret = {}
if opts is None:
opts = {}
repo_id = opts.get('repo_id')
if not repo_id:
raise koji.BuildError("A repo id must be provided")
repo_info = self.session.repoInfo(repo_id, strict=True)
event_id = repo_info['create_event']
# starting srpm should already have been uploaded by parent
self.logger.debug("Reading SRPM")
fn = self.localPath("work/%s" % pkg)
if not os.path.exists(fn):
raise koji.BuildError("SRPM file missing: %s" % fn)
# peel E:N-V-R from package
h = koji.get_rpm_header(fn)
name = koji.get_header_field(h, 'name')
if not koji.get_header_field(h, 'sourcepackage'):
raise koji.BuildError("not a source package")
# Disable checking for distribution in the initial SRPM because it
# might have been built outside of the build system
# if not koji.get_header_field(h, 'distribution'):
# raise koji.BuildError, "the distribution tag is not set in the original srpm"
self.updateWeight(name)
rootopts = {
'repo_id': repo_id
}
if arch == "noarch":
# The task arch could have been forced by ExclusiveArch/ExcludeArch,
# so we should honor it here.
task = self.session.getTaskInfo(self.id)
preferred_arch = task['arch']
else:
preferred_arch = None
br_arch = self.find_arch(arch, self.session.host.getHost(),
self.session.getBuildConfig(root, event=event_id),
preferred_arch=preferred_arch)
broot = BuildRoot(self.session, self.options, root, br_arch, self.id, **rootopts)
broot.workdir = self.workdir
self.logger.debug("Initializing buildroot")
broot.init()
# run build
self.logger.debug("Running build")
broot.build(fn, arch)
# extract results
resultdir = broot.resultdir()
rpm_files = []
srpm_files = []
log_files = list(broot.logs)
unexpected = []
for f in os.listdir(resultdir):
# files here should have one of two extensions: .log and .rpm
if f[-4:] == ".log":
# should already be in log_files
pass
elif f[-8:] == ".src.rpm":
srpm_files.append(f)
elif f[-4:] == ".rpm":
rpm_files.append(f)
else:
unexpected.append(f)
# for noarch rpms compute rpmdiff hash
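# (diffing the file against itself is intentional: Rpmdiff is instantiated
# only so its kojihash() can fingerprint the rpm, presumably letting the
# hub verify that noarch rpms built on different arches match)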
rpmdiff_hash = {self.id: {}}
for rpmf in rpm_files:
if rpmf.endswith('.noarch.rpm'):
fpath = os.path.join(resultdir, rpmf)
d = koji.rpmdiff.Rpmdiff(fpath, fpath, ignore='S5TN')
rpmdiff_hash[self.id][rpmf] = d.kojihash()
if rpmdiff_hash[self.id]:
log_name = 'noarch_rpmdiff.json'
noarch_hash_path = os.path.join(broot.workdir, log_name)
koji.dump_json(noarch_hash_path, rpmdiff_hash, indent=2, sort_keys=True)
self.uploadFile(noarch_hash_path)
log_files.append(log_name)
self.logger.debug("rpms: %r" % rpm_files)
self.logger.debug("srpms: %r" % srpm_files)
self.logger.debug("logs: %r" % log_files)
self.logger.debug("unexpected: %r" % unexpected)
# upload files to storage server
uploadpath = broot.getUploadPath()
for f in rpm_files:
self.uploadFile("%s/%s" % (resultdir, f))
self.logger.debug("keep srpm %i %s %s" % (self.id, keep_srpm, opts))
if keep_srpm:
if len(srpm_files) == 0:
raise koji.BuildError("no srpm files found for task %i" % self.id)
if len(srpm_files) > 1:
raise koji.BuildError("multiple srpm files found for task %i: %s" %
(self.id, srpm_files))
# Run sanity checks. Any failures will throw a BuildError
self.srpm_sanity_checks("%s/%s" % (resultdir, srpm_files[0]))
self.logger.debug("uploading %s/%s to %s" % (resultdir, srpm_files[0], uploadpath))
self.uploadFile("%s/%s" % (resultdir, srpm_files[0]))
if rpm_files:
ret['rpms'] = ["%s/%s" % (uploadpath, f) for f in rpm_files]
else:
ret['rpms'] = []
if keep_srpm:
ret['srpms'] = ["%s/%s" % (uploadpath, f) for f in srpm_files]
else:
ret['srpms'] = []
ret['logs'] = ["%s/%s" % (uploadpath, f) for f in log_files]
ret['brootid'] = broot.id
broot.expire()
# Let TaskManager clean up
return ret
class MavenTask(MultiPlatformTask):
Methods = ['maven']
_taskWeight = 0.2
def handler(self, url, target, opts=None):
"""Use Maven to build the source from the given url"""
if opts is None:
opts = {}
self.opts = opts
target_info = self.session.getBuildTarget(target)
if not target_info:
raise koji.BuildError('unknown build target: %s' % target)
dest_tag = self.session.getTag(target_info['dest_tag'], strict=True)
build_tag = self.session.getTag(target_info['build_tag'], strict=True)
repo_id = opts.get('repo_id')
if not repo_id:
repo = self.session.getRepo(build_tag['id'])
if repo:
repo_id = repo['id']
else:
raise koji.BuildError('no repo for tag %s' % build_tag['name'])
build_opts = dslice(opts, ['goals', 'profiles', 'properties', 'envs', 'patches',
'packages', 'jvm_options', 'maven_options', 'deps', 'scratch'],
strict=False)
build_opts['repo_id'] = repo_id
self.build_task_id = self.session.host.subtask(method='buildMaven',
arglist=[url, build_tag, build_opts],
label='build',
parent=self.id,
arch='noarch')
maven_results = self.wait(self.build_task_id)[self.build_task_id]
maven_results['task_id'] = self.build_task_id
build_info = None
if not self.opts.get('scratch'):
maven_info = maven_results['maven_info']
if maven_info['version'].endswith('-SNAPSHOT'):
raise koji.BuildError('-SNAPSHOT versions are only supported in scratch builds')
build_info = koji.maven_info_to_nvr(maven_info)
if not self.opts.get('skip_tag'):
dest_cfg = self.session.getPackageConfig(dest_tag['id'], build_info['name'])
# Make sure package is on the list for this tag
if dest_cfg is None:
raise koji.BuildError("package %s not in list for tag %s"
% (build_info['name'], dest_tag['name']))
elif dest_cfg['blocked']:
raise koji.BuildError("package %s is blocked for tag %s"
% (build_info['name'], dest_tag['name']))
build_info = self.session.host.initMavenBuild(self.id, build_info, maven_info)
self.build_id = build_info['id']
try:
rpm_results = None
spec_url = self.opts.get('specfile')
if spec_url:
rpm_results = self.buildWrapperRPM(
spec_url, self.build_task_id, target_info, build_info, repo_id)
if self.opts.get('scratch'):
self.session.host.moveMavenBuildToScratch(self.id, maven_results, rpm_results)
else:
self.session.host.completeMavenBuild(
self.id, self.build_id, maven_results, rpm_results)
except (SystemExit, ServerExit, KeyboardInterrupt):
# we do not trap these
raise
except Exception:
if not self.opts.get('scratch'):
# scratch builds do not get imported
self.session.host.failBuild(self.id, self.build_id)
# reraise the exception
raise
if not self.opts.get('scratch') and not self.opts.get('skip_tag'):
tag_task_id = self.session.host.subtask(method='tagBuild',
arglist=[dest_tag['id'],
self.build_id, False, None, True],
label='tag',
parent=self.id,
arch='noarch')
self.wait(tag_task_id)
class BuildMavenTask(BaseBuildTask):
Methods = ['buildMaven']
_taskWeight = 1.5
def _zip_dir(self, rootdir, filename):
rootbase = os.path.basename(rootdir)
roottrim = len(rootdir) - len(rootbase)
zfo = zipfile.ZipFile(filename, 'w', zipfile.ZIP_DEFLATED)
for dirpath, dirnames, filenames in os.walk(rootdir):
for skip in ['CVS', '.svn', '.git']:
if skip in dirnames:
dirnames.remove(skip)
for filename in filenames:
filepath = os.path.join(dirpath, filename)
if os.path.islink(filepath):
content = os.readlink(filepath)
st = os.lstat(filepath)
mtime = time.localtime(st.st_mtime)
info = zipfile.ZipInfo(filepath[roottrim:])
info.external_attr |= 0o120000 << 16 # symlink file type
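# (the high 16 bits of external_attr carry the Unix mode; 0o120000 is
# S_IFLNK, so the entry is stored as a symlink whose content is the target)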
info.compress_type = zipfile.ZIP_STORED
info.date_time = mtime[:6]
zfo.writestr(info, content)
else:
zfo.write(filepath, filepath[roottrim:])
zfo.close()
def checkHost(self, hostdata):
tag = self.params[1]
return self.checkHostArch(tag, hostdata)
def handler(self, url, build_tag, opts=None):
if opts is None:
opts = {}
self.opts = opts
scm = SCM(url, allow_password=self.options.allow_password_in_scm_url)
scm_policy_opts = {
'user_id': self.taskinfo['owner'],
'channel': self.session.getChannel(self.taskinfo['channel_id'],
strict=True)['name'],
'scratch': self.opts.get('scratch')
}
scm.assert_allowed(allowed=self.options.allowed_scms,
session=self.session,
by_config=self.options.allowed_scms_use_config,
by_policy=self.options.allowed_scms_use_policy,
policy_data=scm_policy_opts)
repo_id = opts.get('repo_id')
if not repo_id:
raise koji.BuildError('A repo_id must be provided')
repo_info = self.session.repoInfo(repo_id, strict=True)
event_id = repo_info['create_event']
br_arch = self.find_arch('noarch', self.session.host.getHost(
), self.session.getBuildConfig(build_tag['id'], event=event_id))
maven_opts = opts.get('jvm_options')
if not maven_opts:
maven_opts = []
for opt in maven_opts:
if opt.startswith('-Xmx'):
break
else:
# Give the JVM 2G to work with by default, if the build isn't specifying
# its own max. memory
maven_opts.append('-Xmx2048m')
buildroot = BuildRoot(self.session, self.options, build_tag['id'], br_arch, self.id,
install_group='maven-build', setup_dns=True, repo_id=repo_id,
maven_opts=maven_opts, maven_envs=opts.get('envs'),
deps=opts.get('deps'))
buildroot.workdir = self.workdir
self.logger.debug("Initializing buildroot")
buildroot.init()
packages = opts.get('packages')
if packages:
rv = buildroot.mock(['--install'] + packages)
self.session.host.setBuildRootState(buildroot.id, 'BUILDING')
self.session.host.updateBuildRootList(buildroot.id, buildroot.getPackageList())
if rv:
buildroot.expire()
raise koji.BuildrootError('error installing packages, %s' %
buildroot._mockResult(rv, logfile='mock_output.log'))
# existence of symlink should be sufficient
if not os.path.lexists('%s/usr/bin/mvn' % buildroot.rootdir()):
raise koji.BuildError('/usr/bin/mvn was not found in the buildroot')
scmdir = '%s/maven/build' % buildroot.rootdir()
outputdir = '%s/maven/output' % buildroot.rootdir()
m2dir = '%s/builddir/.m2' % buildroot.rootdir()
repodir = '%s/builddir/.m2/repository' % buildroot.rootdir()
patchdir = '%s/maven/patches' % buildroot.rootdir()
koji.ensuredir(scmdir)
koji.ensuredir(outputdir)
koji.ensuredir(repodir)
koji.ensuredir(patchdir)
logfile = self.workdir + '/checkout.log'
uploadpath = self.getUploadDir()
self.run_callbacks('preSCMCheckout', scminfo=scm.get_info(),
build_tag=build_tag, scratch=opts.get('scratch'),
buildroot=buildroot)
# Check out sources from the SCM
sourcedir = scm.checkout(scmdir, self.session, uploadpath, logfile)
self.run_callbacks("postSCMCheckout",
scminfo=scm.get_info(),
build_tag=build_tag,
scratch=opts.get('scratch'),
srcdir=sourcedir,
buildroot=buildroot)
# zip up pristine sources for auditing purposes
self._zip_dir(sourcedir, os.path.join(outputdir, 'scm-sources.zip'))
# Check out patches, if present
if self.opts.get('patches'):
patchlog = self.workdir + '/patches.log'
patch_scm = SCM(self.opts.get('patches'),
allow_password=self.options.allow_password_in_scm_url)
patch_scm.assert_allowed(allowed=self.options.allowed_scms,
session=self.session,
by_config=self.options.allowed_scms_use_config,
by_policy=self.options.allowed_scms_use_policy,
policy_data=scm_policy_opts)
self.run_callbacks('preSCMCheckout', scminfo=patch_scm.get_info(),
build_tag=build_tag, scratch=opts.get('scratch'),
buildroot=buildroot)
# never try to check out a common/ dir when checking out patches
patch_scm.use_common = False
patchcheckoutdir = patch_scm.checkout(patchdir, self.session, uploadpath, patchlog)
self.run_callbacks("postSCMCheckout",
scminfo=patch_scm.get_info(),
build_tag=build_tag,
scratch=opts.get('scratch'),
srcdir=patchcheckoutdir,
buildroot=buildroot)
self._zip_dir(patchcheckoutdir, os.path.join(outputdir, 'patches.zip'))
# Apply patches, if present
if self.opts.get('patches'):
# keep only regular files ending in .patch (skipping directories and scm metadata)
patches = [patch for patch in os.listdir(patchcheckoutdir)
if os.path.isfile(os.path.join(patchcheckoutdir, patch)) and
patch.endswith('.patch')]
if not patches:
raise koji.BuildError('no patches found at %s' % self.opts.get('patches'))
patches.sort()
for patch in patches:
cmd = ['/usr/bin/patch', '--verbose', '--no-backup-if-mismatch', '-d',
sourcedir, '-p1', '-i', os.path.join(patchcheckoutdir, patch)]
ret = log_output(self.session, cmd[0], cmd,
patchlog, uploadpath, logerror=1, append=1)
if ret:
raise koji.BuildError(
'error applying patches from %s, see patches.log for details' %
self.opts.get('patches'))
# Set ownership of the entire source tree to the mock user
uid = pwd.getpwnam(self.options.mockuser)[2]
gid = grp.getgrnam('mock')[2]
self.chownTree(scmdir, uid, gid)
self.chownTree(outputdir, uid, gid)
self.chownTree(m2dir, uid, gid)
if self.opts.get('patches'):
self.chownTree(patchdir, uid, gid)
settingsfile = '/builddir/.m2/settings.xml'
buildroot.writeMavenSettings(settingsfile, outputdir)
pomfile = 'pom.xml'
maven_options = self.opts.get('maven_options', [])
for i, opt in enumerate(maven_options):
if opt == '-f' or opt == '--file':
if len(maven_options) > (i + 1):
pomfile = maven_options[i + 1]
break
else:
raise koji.BuildError('%s option requires a file path' % opt)
elif opt.startswith('-f=') or opt.startswith('--file='):
pomfile = opt.split('=', 1)[1]
break
elif opt.startswith('-f'):
pomfile = opt[2:]
break
buildroot.mavenBuild(sourcedir, outputdir, repodir,
props=self.opts.get('properties'), profiles=self.opts.get('profiles'),
options=self.opts.get('maven_options'), goals=self.opts.get('goals'))
build_pom = os.path.join(sourcedir, pomfile)
if not os.path.exists(build_pom):
raise koji.BuildError('%s does not exist' % pomfile)
pom_info = koji.parse_pom(build_pom)
maven_info = koji.pom_to_maven_info(pom_info)
# give the zip files more descriptive names
os.rename(os.path.join(outputdir, 'scm-sources.zip'),
os.path.join(outputdir, maven_info['artifact_id'] + '-' +
maven_info['version'] + '-scm-sources.zip'))
if self.opts.get('patches'):
os.rename(os.path.join(outputdir, 'patches.zip'),
os.path.join(outputdir, maven_info['artifact_id'] + '-' +
maven_info['version'] + '-patches.zip'))
logs = ['checkout.log']
if self.opts.get('patches'):
logs.append('patches.log')
output_files = {}
for path, dirs, files in os.walk(outputdir):
if not files:
continue
reldir = path[len(outputdir) + 1:]
for filename in files:
root, ext = os.path.splitext(filename)
if ext == '.log':
logs.append(os.path.join(reldir, filename))
else:
output_files.setdefault(reldir, []).append(filename)
# upload the build output
for filepath in logs:
self.uploadFile(os.path.join(outputdir, filepath),
relPath=os.path.dirname(filepath))
for relpath, files in six.iteritems(output_files):
for filename in files:
self.uploadFile(os.path.join(outputdir, relpath, filename),
relPath=relpath)
# Also include the logs already uploaded by BuildRoot
logs.extend(buildroot.logs)
buildroot.expire()
return {'maven_info': maven_info,
'buildroot_id': buildroot.id,
'logs': logs,
'files': output_files}
class WrapperRPMTask(BaseBuildTask):
"""Build a wrapper rpm around archives output from a Maven or Windows build.
May either be called as a subtask or as a separate
top-level task. In the latter case it can either associate the new rpms
with the existing build or create a new build."""
Methods = ['wrapperRPM']
_taskWeight = 1.5
def copy_fields(self, src, tgt, *fields):
for field in fields:
tgt[field] = src.get(field)
def spec_sanity_checks(self, filename):
spec = koji._open_text_file(filename).read()
for tag in ("Packager", "Distribution", "Vendor"):
if re.match("%s:" % tag, spec, re.M):
raise koji.BuildError("%s is not allowed to be set in spec file" % tag)
for tag in ("packager", "distribution", "vendor"):
if re.match(r"%%define\s+%s\s+" % tag, spec, re.M):
raise koji.BuildError("%s is not allowed to be defined in spec file" % tag)
def checkHost(self, hostdata):
target = self.params[1]
return self.checkHostArch(target['build_tag'], hostdata)
def handler(self, spec_url, build_target, build, task, opts=None):
if not opts:
opts = {}
if not (build or task):
raise koji.BuildError('build and/or task must be specified')
values = {}
if build:
maven_info = self.session.getMavenBuild(build['id'], strict=False)
win_info = self.session.getWinBuild(build['id'], strict=False)
image_info = self.session.getImageBuild(build['id'], strict=False)
else:
maven_info = None
win_info = None
image_info = None
# list of artifact paths relative to kojiroot (not exposed to the specfile)
artifact_relpaths = []
# map of file extension to a list of files
artifacts = {}
# list of all files
all_artifacts = []
# list of all files with their repo path
all_artifacts_with_path = []
# makes generating relative paths easier
self.pathinfo = koji.PathInfo(topdir='')
if task:
# called as a subtask of a build
artifact_data = self.session.listTaskOutput(task['id'], all_volumes=True)
for artifact_path in artifact_data:
artifact_name = os.path.basename(artifact_path)
base, ext = os.path.splitext(artifact_name)
if ext == '.log':
# Exclude log files for consistency with the output of listArchives() used
# below
continue
relpath = os.path.join(self.pathinfo.task(task['id']), artifact_path)[1:]
for volume in artifact_data[artifact_path]:
volume_path = os.path.join(self.pathinfo.volumedir(volume), relpath)
artifact_relpaths.append(volume_path)
artifacts.setdefault(ext, []).append(artifact_name)
all_artifacts.append(artifact_name)
all_artifacts_with_path.append(volume_path)
else:
# called as a top-level task to create wrapper rpms for an existing build
# verify that the build is complete
if build['state'] != koji.BUILD_STATES['COMPLETE']:
raise koji.BuildError(
'cannot call wrapperRPM on a build that did not complete successfully')
# get the list of files from the build instead of the task,
# because the task output directory may have already been cleaned up
if maven_info:
build_artifacts = self.session.listArchives(buildID=build['id'], type='maven')
elif win_info:
build_artifacts = self.session.listArchives(buildID=build['id'], type='win')
elif image_info:
build_artifacts = self.session.listArchives(buildID=build['id'], type='image')
else:
raise koji.BuildError('unsupported build type')
for artifact in build_artifacts:
artifact_name = artifact['filename']
base, ext = os.path.splitext(artifact_name)
artifacts.setdefault(ext, []).append(artifact_name)
all_artifacts.append(artifact_name)
if ext == '.log':
# listArchives() should never return .log files, but we check for completeness
continue
if maven_info:
repopath = self.pathinfo.mavenfile(artifact)
relpath = os.path.join(self.pathinfo.mavenbuild(build), repopath)[1:]
artifact_relpaths.append(relpath)
all_artifacts_with_path.append(repopath)
elif win_info:
repopath = self.pathinfo.winfile(artifact)
relpath = os.path.join(self.pathinfo.winbuild(build), repopath)[1:]
artifact_relpaths.append(relpath)
all_artifacts_with_path.append(repopath)
elif image_info:
ipath = self.pathinfo.imagebuild(build)
relpath = os.path.join(ipath, artifact_name)[1:]
artifact_relpaths.append(relpath)
all_artifacts_with_path.append(artifact_name)
else:
# can't happen
assert False # pragma: no cover
if not artifacts:
raise koji.BuildError('no output found for %s' % (
task and koji.taskLabel(task) or koji.buildLabel(build)))
values['artifacts'] = artifacts
values['all_artifacts'] = all_artifacts
values['all_artifacts_with_path'] = all_artifacts_with_path
if build:
self.copy_fields(build, values, 'epoch', 'name', 'version', 'release')
if maven_info:
values['maven_info'] = maven_info
elif win_info:
values['win_info'] = win_info
elif image_info:
values['image_info'] = image_info
else:
# can't happen
assert False # pragma: no cover
else:
task_result = self.session.getTaskResult(task['id'])
if task['method'] == 'buildMaven':
maven_info = task_result['maven_info']
maven_nvr = koji.maven_info_to_nvr(maven_info)
maven_nvr['release'] = '0.scratch'
self.copy_fields(maven_nvr, values, 'epoch', 'name', 'version', 'release')
values['maven_info'] = maven_info
elif task['method'] == 'vmExec':
self.copy_fields(task_result, values, 'epoch', 'name', 'version', 'release')
values['win_info'] = {'platform': task_result['platform']}
elif task['method'] in ('createLiveCD', 'createAppliance', 'createImage',
'createLiveMedia'):
self.copy_fields(task_result, values, 'epoch', 'name', 'version', 'release')
else:
# can't happen
assert False # pragma: no cover
scm = SCM(spec_url, allow_password=self.options.allow_password_in_scm_url)
scm.assert_allowed(allowed=self.options.allowed_scms,
session=self.session,
by_config=self.options.allowed_scms_use_config,
by_policy=self.options.allowed_scms_use_policy,
policy_data={
'user_id': self.taskinfo['owner'],
'channel': self.session.getChannel(self.taskinfo['channel_id'],
strict=True)['name'],
'scratch': opts.get('scratch')
})
if opts.get('create_build') and opts.get('custom_user_metadata'):
try:
json.dumps(opts['custom_user_metadata'])
except TypeError:
error_msg = 'custom_user_metadata is not JSON serializable'
raise koji.BuildError(error_msg)
repo_id = opts.get('repo_id')
if not repo_id:
raise koji.BuildError("A repo id must be provided")
repo_info = self.session.repoInfo(repo_id, strict=True)
event_id = repo_info['create_event']
build_tag = self.session.getTag(build_target['build_tag'], strict=True)
br_arch = self.find_arch('noarch', self.session.host.getHost(
), self.session.getBuildConfig(build_tag['id'], event=event_id))
buildroot = BuildRoot(self.session, self.options, build_tag['id'], br_arch, self.id,
install_group='wrapper-rpm-build', repo_id=repo_id)
buildroot.workdir = self.workdir
self.logger.debug("Initializing buildroot")
buildroot.init()
logfile = os.path.join(self.workdir, 'checkout.log')
scmdir = buildroot.tmpdir() + '/scmroot'
koji.ensuredir(scmdir)
self.run_callbacks('preSCMCheckout', scminfo=scm.get_info(),
build_tag=build_tag, scratch=opts.get('scratch'),
buildroot=buildroot)
specdir = scm.checkout(scmdir, self.session, self.getUploadDir(), logfile)
self.run_callbacks("postSCMCheckout",
scminfo=scm.get_info(),
build_tag=build_tag,
scratch=opts.get('scratch'),
srcdir=specdir,
buildroot=buildroot)
# get the source before chown; git > 2.35.2 would refuse to do that later
source = scm.get_source()
spec_template = None
for path, dirs, files in os.walk(specdir):
files.sort()
for filename in files:
if filename.endswith('.spec.tmpl'):
spec_template = os.path.join(path, filename)
break
if not spec_template:
raise koji.BuildError('no spec file template found at URL: %s' % spec_url)
# Put the jars into the same directory as the specfile. This directory will be
# set to the rpm _sourcedir so other files in the SCM may be referenced in the
# specfile as well.
specdir = os.path.dirname(spec_template)
for relpath in artifact_relpaths:
localpath = self.localPath(relpath)
# RPM requires all SOURCE files in the srpm to be in the same directory, so
# we flatten any directory structure of the output files here.
# If multiple files in the build have the same basename, duplicate files will
# have their relative path prepended to their name, with / replaced with -.
destpath = os.path.join(specdir, os.path.basename(relpath))
if os.path.exists(destpath):
destpath = os.path.join(specdir, relpath.replace('/', '-'))
shutil.copy(localpath, destpath)
# change directory to the specdir so the template can reference files there
os.chdir(specdir)
contents = Cheetah.Template.Template(file=spec_template,
searchList=[values]).respond()
contents = contents.encode('utf-8')
specfile = spec_template[:-5]
with open(specfile, 'wb') as specfd:
specfd.write(contents)
uploadpath = self.getUploadDir()
self.session.uploadWrapper(specfile, uploadpath)
# Run spec file sanity checks. Any failures will throw a BuildError
self.spec_sanity_checks(specfile)
# chown the specdir to the mock user, because srpm creation happens
# as an unprivileged user
uid = pwd.getpwnam(self.options.mockuser)[2]
gid = grp.getgrnam('mock')[2]
self.chownTree(specdir, uid, gid)
# build srpm
self.logger.debug("Running srpm build")
buildroot.build_srpm(specfile, specdir, None)
srpms = glob.glob('%s/*.src.rpm' % buildroot.resultdir())
if len(srpms) == 0:
raise koji.BuildError('no srpms found in %s' % buildroot.resultdir())
elif len(srpms) > 1:
raise koji.BuildError('multiple srpms found in %s: %s' %
(buildroot.resultdir(), ', '.join(srpms)))
else:
srpm = srpms[0]
shutil.move(srpm, self.workdir)
srpm = os.path.join(self.workdir, os.path.basename(srpm))
self.new_build_id = None
if opts.get('create_build') and not opts.get('scratch'):
h = koji.get_rpm_header(srpm)
data = koji.get_header_fields(h, ['name', 'version', 'release', 'epoch'])
data['task_id'] = self.id
data['source'] = source['source']
data['extra'] = {'source': {'original_url': source['url']}}
if opts.get('custom_user_metadata'):
data['extra']['custom_user_metadata'] = opts['custom_user_metadata']
# pass draft option in
if opts.get('draft'):
data['draft'] = opts.get('draft')
self.logger.info("Reading package config for %(name)s" % data)
pkg_cfg = self.session.getPackageConfig(build_target['dest_tag'], data['name'])
if not opts.get('skip_tag'):
# Make sure package is on the list for this tag
if pkg_cfg is None:
raise koji.BuildError("package %s not in list for tag %s"
% (data['name'], build_target['dest_tag_name']))
elif pkg_cfg['blocked']:
raise koji.BuildError("package %s is blocked for tag %s"
% (data['name'], build_target['dest_tag_name']))
self.new_build_id = self.session.host.initBuild(data)
try:
buildroot.build(srpm)
except (SystemExit, ServerExit, KeyboardInterrupt):
raise
except Exception:
if self.new_build_id:
self.session.host.failBuild(self.id, self.new_build_id)
raise
resultdir = buildroot.resultdir()
srpm = None
rpms = []
specfile_name = os.path.basename(specfile)
logs = ['checkout.log', specfile_name] + list(buildroot.logs)
for filename in os.listdir(resultdir):
if filename.endswith('.src.rpm'):
if not srpm:
srpm = filename
else:
if self.new_build_id:
self.session.host.failBuild(self.id, self.new_build_id)
raise koji.BuildError('multiple srpms found in %s: %s, %s' %
(resultdir, srpm, filename))
elif filename.endswith('.rpm'):
rpms.append(filename)
elif filename.endswith('.log'):
pass
# already included in buildroot.logs
else:
if self.new_build_id:
self.session.host.failBuild(self.id, self.new_build_id)
raise koji.BuildError('unexpected file found in %s: %s' %
(resultdir, filename))
if not srpm:
if self.new_build_id:
self.session.host.failBuild(self.id, self.new_build_id)
raise koji.BuildError('no srpm found')
if not rpms:
if self.new_build_id:
self.session.host.failBuild(self.id, self.new_build_id)
raise koji.BuildError('no rpms found')
try:
for rpm_fn in [srpm] + rpms:
self.uploadFile(os.path.join(resultdir, rpm_fn))
except (SystemExit, ServerExit, KeyboardInterrupt):
raise
except Exception:
if self.new_build_id:
self.session.host.failBuild(self.id, self.new_build_id)
raise
results = {'buildroot_id': buildroot.id,
'srpm': srpm,
'rpms': rpms,
'logs': logs,
'source': source}
if opts.get('create_build') and opts.get('custom_user_metadata'):
results['custom_user_metadata'] = opts['custom_user_metadata']
if not task:
# Called as a standalone top-level task, so handle the rpms now.
# Otherwise we let the parent task handle it.
uploaddir = self.getUploadDir()
relsrpm = uploaddir + '/' + srpm
relrpms = [uploaddir + '/' + rpm for rpm in rpms]
rellogs = [uploaddir + '/' + log for log in logs]
if opts.get('scratch'):
self.session.host.moveBuildToScratch(
self.id, relsrpm, relrpms, {'noarch': rellogs})
else:
if opts.get('create_build'):
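# brmap maps each uploaded rpm path to the buildroot it was built in;
# everything here came from the same buildroot, hence dict.fromkeys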
brmap = dict.fromkeys([relsrpm] + relrpms, buildroot.id)
try:
self.session.host.completeBuild(self.id, self.new_build_id,
relsrpm, relrpms, brmap,
{'noarch': rellogs})
except (SystemExit, ServerExit, KeyboardInterrupt):
raise
except Exception:
self.session.host.failBuild(self.id, self.new_build_id)
raise
if not opts.get('skip_tag'):
tag_task_id = self.session.host.subtask(method='tagBuild',
arglist=[build_target['dest_tag'],
self.new_build_id, False,
None, True],
label='tag', parent=self.id,
arch='noarch')
self.wait(tag_task_id)
else:
self.session.host.importWrapperRPMs(self.id, build['id'], results)
# no need to upload logs, they've already been streamed to the hub
# during the build process
buildroot.expire()
return results
class ChainMavenTask(MultiPlatformTask):
Methods = ['chainmaven']
_taskWeight = 0.2
def handler(self, builds, target, opts=None):
"""Run a sequence of Maven builds in dependency order"""
if not opts:
opts = {}
target_info = self.session.getBuildTarget(target)
if not target_info:
raise koji.BuildError('unknown build target: %s' % target)
dest_tag = self.session.getTag(target_info['dest_tag'], strict=True)
if not (opts.get('scratch') or opts.get('skip_tag')):
for package in builds:
dest_cfg = self.session.getPackageConfig(dest_tag['id'], package)
# Make sure package is on the list for this tag
if dest_cfg is None:
raise koji.BuildError("package %s not in list for tag %s"
% (package, dest_tag['name']))
elif dest_cfg['blocked']:
raise koji.BuildError("package %s is blocked for tag %s"
% (package, dest_tag['name']))
self.depmap = {}
for package, params in builds.items():
self.depmap[package] = set(params.get('buildrequires', []))
todo = copy.deepcopy(self.depmap)
running = {}
self.done = {}
self.results = []
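# simple topological scheduler: start every package whose dependencies
# are all done, wait for results, unblock dependents, and repeat until
# nothing is ready and nothing is running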
while True:
ready = [package for package, deps in todo.items() if not deps]
if not ready and not running:
break
for package in ready:
params = builds[package]
buildtype = params.get('type', 'maven')
task_url = params['scmurl']
task_opts = dslice_ex(params, ['scmurl', 'buildrequires', 'type'], strict=False)
if buildtype == 'maven':
task_deps = list(self.depset(package))
if task_deps:
task_opts['deps'] = task_deps
if not opts.get('force'):
# check for a duplicate build (a build performed with the
# same scmurl and options)
dup_build = self.get_duplicate_build(
dest_tag['name'], package, params, task_opts)
# if we find one, mark the package as built and remove it from todo
if dup_build:
self.done[package] = dup_build['nvr']
for deps in todo.values():
deps.discard(package)
del todo[package]
self.results.append('%s previously built from %s' %
(dup_build['nvr'], task_url))
continue
task_opts.update(dslice(opts, ['skip_tag', 'scratch'], strict=False))
if buildtype == 'maven':
if opts.get('debug'):
task_opts.setdefault('maven_options', []).append('--debug')
task_id = self.subtask('maven', [task_url, target, task_opts],
label=package)
elif buildtype == 'wrapper':
pkg_to_wrap = params['buildrequires'][0]
to_wrap = self.done[pkg_to_wrap]
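# scratch chains record task ids (ints) in self.done; regular chains
# record NVR strings (see the bookkeeping after self.wait below)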
if isinstance(to_wrap, six.integer_types):
task_to_wrap = self.session.getTaskInfo(to_wrap, request=True)
build_to_wrap = None
else:
build_to_wrap = self.session.getBuild(to_wrap, strict=True)
task_to_wrap = None
target_info = self.session.getBuildTarget(target, strict=True)
repo_info = self.getRepo(target_info['build_tag'])
task_opts['repo_id'] = repo_info['id']
task_id = self.subtask('wrapperRPM', [task_url, target_info,
build_to_wrap, task_to_wrap,
task_opts],
label=package)
else:
raise koji.BuildError('unsupported build type: %s' % buildtype)
running[task_id] = package
del todo[package]
try:
results = self.wait(to_list(running.keys()))
except (six.moves.xmlrpc_client.Fault, koji.GenericError):
# One task has failed; wait for the rest to complete before the
# chainmaven task fails. self.wait(all=True) should throw an exception.
self.wait(all=True)
raise
# if we get here, results is a map whose keys are the ids of tasks
# that have completed successfully
for task_id in results:
package = running.pop(task_id)
task_url = builds[package]['scmurl']
if opts.get('scratch'):
if builds[package].get('type') == 'wrapper':
self.done[package] = task_id
else:
children = self.session.getTaskChildren(task_id)
for child in children:
# we want the ID of the buildMaven task because the
# output dir of that task is where the Maven repo is
if child['method'] == 'buildMaven':
self.done[package] = child['id']
break
else:
raise koji.BuildError(
'could not find buildMaven subtask of %s' % task_id)
self.results.append('%s built from %s by task %s' %
(package, task_url, task_id))
else:
task_builds = self.session.listBuilds(taskID=task_id)
if not task_builds:
raise koji.BuildError('could not find build for task %s' % task_id)
task_build = task_builds[0]
self.done[package] = task_build['nvr']
self.results.append('%s built from %s' % (task_build['nvr'], task_url))
for deps in todo.values():
deps.discard(package)
if todo:
# should never happen, the client should have checked for circular dependencies
raise koji.BuildError('unable to run chain build, circular dependencies')
return self.results
def depset(self, package):
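# Return the transitive closure of completed deps for a package, e.g. if
# A buildrequires B and B buildrequires C, depset('A') includes the done
# entries for both B and C.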
deps = set()
for dep in self.depmap[package]:
deps.add(self.done[dep])
deps.update(self.depset(dep))
return deps
def dicts_equal(self, a, b):
"""Check if two dicts are equal. They are considered equal if they
have the same keys and those keys have the same values. If a value is
a list, it is considered equal to a list containing the same values in
a different order."""
akeys = to_list(a.keys())
bkeys = to_list(b.keys())
if sorted(akeys) != sorted(bkeys):
return False
for key in akeys:
aval = a.get(key)
bval = b.get(key)
if not isinstance(aval, type(bval)):
return False
if isinstance(aval, dict):
if not self.dicts_equal(aval, bval):
return False
elif isinstance(aval, list):
if not sorted(aval) == sorted(bval):
return False
else:
if not aval == bval:
return False
return True
def get_duplicate_build(self, tag, package, params, task_opts):
"""Find the latest build of package in tag and compare it to the
scmurl and task_opts. If they're identical, return the build."""
builds = self.session.getLatestBuilds(tag, package=package)
if not builds:
return None
build = builds[0]
if not build['task_id']:
return None
build_task = self.session.getTaskInfo(build['task_id'], request=True)
request = build_task['request']
if request[0] != params['scmurl']:
return None
if params.get('type') == 'wrapper':
wrapped_build = request[2]
pkg_to_wrap = params['buildrequires'][0]
nvr_to_wrap = self.done[pkg_to_wrap]
if wrapped_build['nvr'] != nvr_to_wrap:
return None
# For a wrapper-rpm build, the only parameters that really matter
# are the scmurl and the wrapped NVR. These both match, so
# return the existing build.
return build
if len(request) > 2:
build_opts = dslice_ex(request[2], ['skip_tag', 'scratch'], strict=False)
else:
build_opts = {}
task_opts = copy.deepcopy(task_opts)
# filter out options that don't affect the build output
# to avoid unnecessary rebuilds
for opts in [build_opts, task_opts]:
if 'maven_options' in opts:
maven_options = opts['maven_options']
for opt in ['-e', '--errors', '-q', '--quiet',
'-V', '--show-version', '-X', '--debug']:
if opt in maven_options:
maven_options.remove(opt)
if not maven_options:
del opts['maven_options']
if 'jvm_options' in opts:
del opts['jvm_options']
if not self.dicts_equal(build_opts, task_opts):
return None
# everything matches
return build
class TagBuildTask(BaseTaskHandler):
Methods = ['tagBuild']
# XXX - set weight?
def handler(self, tag_id, build_id, force=False, fromtag=None, ignore_success=False):
task = self.session.getTaskInfo(self.id)
user_id = task['owner']
try:
self.session.getBuild(build_id, strict=True)
self.session.getTag(tag_id, strict=True)
# several basic sanity checks have already been run (and will be run
# again when we make the final call). Our job is to perform the more
# computationally expensive 'post' tests.
# XXX - add more post tests
self.session.host.tagBuild(self.id, tag_id, build_id, force=force, fromtag=fromtag)
self.session.host.tagNotification(
True, tag_id, fromtag, build_id, user_id, ignore_success)
except Exception:
exctype, value = sys.exc_info()[:2]
self.session.host.tagNotification(
False, tag_id, fromtag, build_id, user_id, ignore_success, "%s: %s" %
(exctype, value))
raise
class BuildImageTask(MultiPlatformTask):
def initImageBuild(self, name, version, release, target_info, opts):
"""create a build object for this image build"""
pkg_cfg = self.session.getPackageConfig(target_info['dest_tag_name'],
name)
self.logger.debug("%r" % pkg_cfg)
if not opts.get('skip_tag') and not opts.get('scratch'):
# Make sure package is on the list for this tag
if pkg_cfg is None:
raise koji.BuildError("package (image) %s not in list for tag %s" %
(name, target_info['dest_tag_name']))
elif pkg_cfg['blocked']:
raise koji.BuildError("package (image) %s is blocked for tag %s" %
(name, target_info['dest_tag_name']))
return self.session.host.initImageBuild(self.id,
dict(name=name, version=version, release=release,
epoch=0))
class BuildBaseImageTask(BuildImageTask):
Methods = ['image']
def handler(self, name, version, arches, target, inst_tree, opts=None):
"""Governing task for building an appliance using Oz"""
target_info = self.session.getBuildTarget(target, strict=True)
build_tag = target_info['build_tag']
repo_info = self.getRepo(build_tag)
# check requested arches against build tag
buildconfig = self.session.getBuildConfig(build_tag)
if not buildconfig['arches']:
raise koji.BuildError("No arches for tag %(name)s [%(id)s]" % buildconfig)
tag_archlist = [koji.canonArch(a) for a in buildconfig['arches'].split()]
for arch in arches:
if koji.canonArch(arch) not in tag_archlist:
raise koji.BuildError("Invalid arch for build tag: %s" % arch)
if not opts:
opts = {}
if not ozif_enabled:
self.logger.error(
"ImageFactory features require the following dependencies: pykickstart, "
"imagefactory, oz and possibly python-hashlib")
raise koji.ApplianceError('ImageFactory functions not available')
# Policy check
task_info = self.session.getTaskInfo(self.id)
policy_data = {
'user_id': task_info['owner'],
'source': opts.get('ksurl'),
'task_id': self.id,
'build_tag': build_tag, # id
'skip_tag': bool(opts.get('skip_tag')),
'scratch': bool(opts.get('scratch')),
'from_scm': False,
'repo_id': opts.get('repo_id'),
'target': target_info['name'],
}
if not opts.get('skip_tag'):
policy_data['tag'] = target_info['dest_tag'] # id
self.session.host.assertPolicy('build_rpm', policy_data)
# build image(s)
bld_info = None
try:
release = opts.get('release')
if '-' in version:
raise koji.ApplianceError('The Version may not have a hyphen')
if release and '-' in release:
raise koji.ApplianceError('The Release may not have a hyphen')
if not opts.get('scratch'):
bld_info = self.initImageBuild(name, version, release,
target_info, opts)
release = bld_info['release']
elif not release:
# scratch build should have some reasonable release
release = self.session.getNextRelease(dict(name=name, version=version))
subtasks = {}
self.logger.debug("Spawning jobs for image arches: %r" % (arches))
canfail = []
for arch in arches:
inst_url = inst_tree.replace('$arch', arch)
subtasks[arch] = self.session.host.subtask(
method='createImage',
arglist=[name, version, release, arch, target_info,
build_tag, repo_info, inst_url, opts],
label=arch, parent=self.id, arch=arch)
if arch in opts.get('optional_arches', []):
canfail.append(subtasks[arch])
self.logger.debug("Got image subtasks: %r" % (subtasks))
self.logger.debug("Waiting on image subtasks (%s can fail)..." % canfail)
results = self.wait(to_list(subtasks.values()), all=True,
failany=True, canfail=canfail)
# if everything failed, fail even if all subtasks are in canfail
self.logger.debug('subtask results: %r', results)
all_failed = True
for result in results.values():
if not isinstance(result, dict) or 'faultCode' not in result:
all_failed = False
break
if all_failed:
raise koji.GenericError("all subtasks failed")
# determine ignored arch failures
ignored_arches = set()
for arch in arches:
if arch in opts.get('optional_arches', []):
task_id = subtasks[arch]
result = results[task_id]
if isinstance(result, dict) and 'faultCode' in result:
ignored_arches.add(arch)
# wrap in an RPM if asked
spec_url = opts.get('specfile')
for arch in arches:
# XML-RPC struct keys must be strings, so use arches for keys instead
# of integer task ids
results[arch] = results[subtasks[arch]]
del results[subtasks[arch]]
if spec_url and arch not in ignored_arches:
subtask = subtasks[arch]
results[arch]['rpmresults'] = self.buildWrapperRPM(
spec_url, subtask, target_info, bld_info,
repo_info['id'])
# make sure we only import the user-submitted kickstart file one
# time, otherwise we will have collisions. Keep it in exactly one
# subtask's results hash and remove it from all the others.
if 'kickstart' in opts:
saw_ks = False
for arch in results:
if arch in ignored_arches:
continue
ks = os.path.basename(opts['kickstart'])
if ks in results[arch]['files']:
if saw_ks:
results[arch]['files'].remove(ks)
saw_ks = True
self.logger.debug('Image Results for hub: %s' % results)
if opts.get('scratch'):
self.session.host.moveImageBuildToScratch(self.id, results)
else:
self.session.host.completeImageBuild(self.id, bld_info['id'],
results)
except (SystemExit, ServerExit, KeyboardInterrupt):
# we do not trap these
raise
except Exception:
if not opts.get('scratch'):
# scratch builds do not get imported
if bld_info:
self.session.host.failBuild(self.id, bld_info['id'])
# reraise the exception
raise
# tag it
if not opts.get('scratch') and not opts.get('skip_tag'):
tag_task_id = self.session.host.subtask(method='tagBuild',
arglist=[target_info['dest_tag'],
bld_info['id'], False, None, True],
label='tag', parent=self.id, arch='noarch')
self.wait(tag_task_id)
# report results
report = ''
if opts.get('scratch'):
respath = ', '.join(
[os.path.join(koji.pathinfo.work(),
koji.pathinfo.taskrelpath(tid)) for tid in subtasks.values()])
report += 'Scratch '
else:
respath = koji.pathinfo.imagebuild(bld_info)
report += 'image build results in: %s' % respath
return report
class BuildApplianceTask(BuildImageTask):
Methods = ['appliance']
def handler(self, name, version, arch, target, ksfile, opts=None):
"""Governing task for building an appliance"""
target_info = self.session.getBuildTarget(target, strict=True)
build_tag = target_info['build_tag']
repo_info = self.getRepo(build_tag)
# check requested arch against build tag
buildconfig = self.session.getBuildConfig(build_tag)
if not buildconfig['arches']:
raise koji.BuildError("No arches for tag %(name)s [%(id)s]" % buildconfig)
tag_archlist = [koji.canonArch(a) for a in buildconfig['arches'].split()]
if koji.canonArch(arch) not in tag_archlist:
raise koji.BuildError("Invalid arch for build tag: %s" % arch)
if not opts:
opts = {}
if not image_enabled:
self.logger.error(
"Appliance features require the following dependencies: "
"pykickstart, and possibly python-hashlib")
raise koji.ApplianceError('Appliance functions not available')
# build image
bld_info = None
try:
release = opts.get('release')
if not opts.get('scratch'):
bld_info = self.initImageBuild(name, version, release,
target_info, opts)
release = bld_info['release']
create_task_id = self.session.host.subtask(method='createAppliance',
arglist=[name, version, release, arch,
target_info, build_tag,
repo_info, ksfile, opts],
label='appliance', parent=self.id,
arch=arch)
results = self.wait(create_task_id)
self.logger.info('image build task (%s) completed' % create_task_id)
self.logger.info('results: %s' % results)
# wrap in an RPM if asked
spec_url = opts.get('specfile')
if spec_url:
results[create_task_id]['rpmresults'] = self.buildWrapperRPM(
spec_url, create_task_id,
target_info, bld_info, repo_info['id'])
results[str(create_task_id)] = results[create_task_id]
del results[create_task_id]
# import the image (move it too)
if not opts.get('scratch'):
self.session.host.completeImageBuild(self.id, bld_info['id'], results)
else:
self.session.host.moveImageBuildToScratch(self.id, results)
except (SystemExit, ServerExit, KeyboardInterrupt):
# we do not trap these
raise
except Exception:
if not opts.get('scratch'):
# scratch builds do not get imported
if bld_info:
self.session.host.failBuild(self.id, bld_info['id'])
# reraise the exception
raise
# tag it
if not opts.get('scratch') and not opts.get('skip_tag'):
tag_task_id = self.session.host.subtask(method='tagBuild',
arglist=[target_info['dest_tag'],
bld_info['id'], False, None, True],
label='tag', parent=self.id, arch='noarch')
self.wait(tag_task_id)
# report results
if opts.get('scratch'):
respath = os.path.join(koji.pathinfo.work(),
koji.pathinfo.taskrelpath(create_task_id))
report = 'Scratch '
else:
respath = koji.pathinfo.imagebuild(bld_info)
report = ''
report += 'appliance build results in: %s' % respath
return report
class BuildLiveCDTask(BuildImageTask):
Methods = ['livecd']
def handler(self, name, version, arch, target, ksfile, opts=None):
"""Governing task for building LiveCDs"""
target_info = self.session.getBuildTarget(target, strict=True)
build_tag = target_info['build_tag']
repo_info = self.getRepo(build_tag)
# check requested arch against build tag
buildconfig = self.session.getBuildConfig(build_tag)
if not buildconfig['arches']:
raise koji.BuildError("No arches for tag %(name)s [%(id)s]" % buildconfig)
tag_archlist = [koji.canonArch(a) for a in buildconfig['arches'].split()]
if koji.canonArch(arch) not in tag_archlist:
raise koji.BuildError("Invalid arch for build tag: %s" % arch)
if not opts:
opts = {}
if not image_enabled:
self.logger.error("LiveCD features require the following dependencies: "
"pykickstart, pycdio, and possibly python-hashlib")
raise koji.LiveCDError('LiveCD functions not available')
# build the image
bld_info = None
try:
if opts.get('release') is None:
release = self.session.getNextRelease({'name': name, 'version': version})
else:
release = opts.get('release')
if not opts.get('scratch'):
bld_info = self.initImageBuild(name, version, release,
target_info, opts)
release = bld_info['release']
create_task_id = self.session.host.subtask(method='createLiveCD',
arglist=[name, version, release, arch,
target_info, build_tag,
repo_info, ksfile, opts],
label='livecd', parent=self.id, arch=arch)
results = self.wait(create_task_id)
self.logger.info('image build task (%s) completed' % create_task_id)
self.logger.info('results: %s' % results)
# wrap in an RPM if needed
spec_url = opts.get('specfile')
if spec_url:
results[create_task_id]['rpmresults'] = self.buildWrapperRPM(
spec_url, create_task_id,
target_info, bld_info, repo_info['id'])
results[str(create_task_id)] = results[create_task_id]
del results[create_task_id]
# import it (and move)
if not opts.get('scratch'):
self.session.host.completeImageBuild(self.id, bld_info['id'], results)
else:
self.session.host.moveImageBuildToScratch(self.id, results)
except (SystemExit, ServerExit, KeyboardInterrupt):
# we do not trap these
raise
except Exception:
if not opts.get('scratch'):
# scratch builds do not get imported
if bld_info:
self.session.host.failBuild(self.id, bld_info['id'])
# reraise the exception
raise
# tag it if necessary
if not opts.get('scratch') and not opts.get('skip_tag'):
tag_task_id = self.session.host.subtask(method='tagBuild',
arglist=[target_info['dest_tag'],
bld_info['id'], False, None, True],
label='tag', parent=self.id, arch='noarch')
self.wait(tag_task_id)
# report the results
if opts.get('scratch'):
respath = os.path.join(koji.pathinfo.work(),
koji.pathinfo.taskrelpath(create_task_id))
report = 'Scratch '
else:
respath = koji.pathinfo.imagebuild(bld_info)
report = ''
report += 'livecd build results in: %s' % respath
return report
class BuildLiveMediaTask(BuildImageTask):
Methods = ['livemedia']
def handler(self, name, version, arches, target, ksfile, opts=None):
"""Governing task for building live media"""
target_info = self.session.getBuildTarget(target, strict=True)
build_tag = target_info['build_tag']
repo_info = self.getRepo(build_tag)
# check requested arch against build tag
buildconfig = self.session.getBuildConfig(build_tag)
if not buildconfig['arches']:
raise koji.BuildError("No arches for tag %(name)s [%(id)s]" % buildconfig)
tag_archlist = [koji.canonArch(a) for a in buildconfig['arches'].split()]
# check arches and remove duplicates
arches = set(arches)
for arch in arches:
if koji.canonArch(arch) not in tag_archlist:
raise koji.BuildError("Invalid arch for build tag: %s" % arch)
if not opts:
opts = {}
if not image_enabled:
# XXX - are these still required here?
self.logger.error("Missing the following dependencies: "
"pykickstart, pycdio, and possibly python-hashlib")
raise koji.PreBuildError('Live Media functions not available')
# build the image
bld_info = None
try:
release = opts.get('release')
if not opts.get('scratch'):
bld_info = self.initImageBuild(name, version, release,
target_info, opts)
release = bld_info['release']
subtasks = {}
canfail = []
for arch in arches:
subtasks[arch] = self.subtask('createLiveMedia',
[name, version, release,
arch, target_info,
build_tag, repo_info, ksfile, opts],
label='livemedia %s' % arch, arch=arch)
if arch in opts.get('optional_arches', []):
canfail.append(subtasks[arch])
self.logger.debug("Tasks that can fail: %r", canfail)
self.logger.debug("Got image subtasks: %r", subtasks)
self.logger.debug("Waiting on livemedia subtasks...")
results = self.wait(to_list(subtasks.values()), all=True,
failany=True, canfail=canfail)
# if everything failed, fail even if all subtasks are in canfail
self.logger.debug('subtask results: %r', results)
all_failed = True
for result in results.values():
if not isinstance(result, dict) or 'faultCode' not in result:
all_failed = False
break
if all_failed:
raise koji.GenericError("all subtasks failed")
# determine ignored arch failures
ignored_arches = set()
for arch in arches:
if arch in opts.get('optional_arches', []):
task_id = subtasks[arch]
result = results[task_id]
if isinstance(result, dict) and 'faultCode' in result:
ignored_arches.add(arch)
# wrap each image an RPM if needed
spec_url = opts.get('specfile')
if spec_url:
wrapper_tasks = {}
for arch in arches:
subtask_id = subtasks[arch]
result = results[subtask_id]
tinfo = self.session.getTaskInfo(subtask_id)
if arch in ignored_arches:
continue
arglist = [spec_url, target_info, bld_info, tinfo,
{'repo_id': repo_info['id']}]
wrapper_tasks[arch] = self.subtask('wrapperRPM', arglist,
label='wrapper %s' % arch, arch='noarch')
results2 = self.wait(to_list(wrapper_tasks.values()), all=True, failany=True)
self.logger.debug('wrapper results: %r', results2)
# add wrapper rpm results into main results
for arch in arches:
if arch in ignored_arches:
continue
result = results[subtasks[arch]]
result2 = results2[wrapper_tasks[arch]]
result['rpmresults'] = result2
# re-key results for xmlrpc friendliness
results = dict([(str(k), results[k]) for k in results])
# import it (and move)
if not opts.get('scratch'):
self.session.host.completeImageBuild(self.id, bld_info['id'], results)
else:
self.session.host.moveImageBuildToScratch(self.id, results)
except (SystemExit, ServerExit, KeyboardInterrupt):
# we do not trap these
raise
except Exception:
if not opts.get('scratch'):
# scratch builds do not get imported
if bld_info:
self.session.host.failBuild(self.id, bld_info['id'])
# reraise the exception
raise
# tag it if necessary
if not opts.get('scratch') and not opts.get('skip_tag'):
tag_task_id = self.session.host.subtask(method='tagBuild',
arglist=[target_info['dest_tag'],
bld_info['id'], False, None, True],
label='tag', parent=self.id, arch='noarch')
self.wait(tag_task_id)
# report the results
if opts.get('scratch'):
respath = ', '.join(
[os.path.join(koji.pathinfo.work(),
koji.pathinfo.taskrelpath(tid)) for tid in subtasks.values()])
report = 'Scratch '
else:
respath = koji.pathinfo.imagebuild(bld_info)
report = ''
report += 'livemedia build results in: %s' % respath
return report
# A generic task for building cd or disk images using chroot-based tools.
# Other chroot-based image handlers should inherit this.
class ImageTask(BaseTaskHandler):
Methods = []
# default to bind mounting /dev, but allow subclasses to change
# this
bind_opts = {'dirs': {'/dev': '/dev', }}
def makeImgBuildRoot(self, buildtag, repoinfo, arch, inst_group):
"""
Create and prepare the chroot we're going to build an image in.
Binds necessary directories and creates needed device files.
@args:
buildtag: a build tag
repoinfo: a session.getRepo() object
arch: a canonical architecture name
inst_group: a string representing the yum group to install with
@returns: a buildroot object
"""
rootopts = {'install_group': inst_group,
'setup_dns': True,
'repo_id': repoinfo['id']}
if self.bind_opts:
rootopts['bind_opts'] = self.bind_opts
broot = BuildRoot(self.session, self.options, buildtag, arch, self.id, **rootopts)
broot.workdir = self.workdir
# create the mock chroot
self.logger.debug("Initializing image buildroot")
broot.init()
self.logger.debug("Image buildroot ready: " + broot.rootdir())
return broot
def fetchKickstart(self, broot, ksfile, build_tag):
"""
Retrieve the kickstart file we were given (locally or remotely) and
upload it.
Note that if the KS file existed locally, then "ksfile" is a relative
path to it in the /mnt/koji/work directory. If not, then it is still
the parameter the user passed in initially, and we assume it is a
relative path in a remote scm. The user should have passed in an scm
url with --ksurl.
@args:
broot: a buildroot object
ksfile: path to a kickstart file
build_tag: build tag name
@returns: absolute path to the retrieved kickstart file
"""
scmdir = broot.tmpdir()
koji.ensuredir(scmdir)
self.logger.debug("ksfile = %s" % ksfile)
if self.opts.get('ksurl'):
scm = SCM(self.opts['ksurl'], allow_password=self.options.allow_password_in_scm_url)
scm.assert_allowed(allowed=self.options.allowed_scms,
session=self.session,
by_config=self.options.allowed_scms_use_config,
by_policy=self.options.allowed_scms_use_policy,
policy_data={
'user_id': self.taskinfo['owner'],
'channel': self.session.getChannel(self.taskinfo['channel_id'],
strict=True)['name'],
'scratch': self.opts.get('scratch')
})
logfile = os.path.join(self.workdir, 'checkout.log')
self.run_callbacks('preSCMCheckout', scminfo=scm.get_info(),
build_tag=build_tag, scratch=self.opts.get('scratch'),
buildroot=broot)
scmsrcdir = scm.checkout(scmdir, self.session, self.getUploadDir(), logfile)
self.run_callbacks("postSCMCheckout",
scminfo=scm.get_info(),
build_tag=build_tag,
scratch=self.opts.get('scratch'),
srcdir=scmsrcdir,
buildroot=broot)
kspath = os.path.join(scmsrcdir, ksfile)
else:
kspath = self.localPath("work/%s" % ksfile)
self.uploadFile(kspath) # upload the original ks file
return kspath # full absolute path to the file in the chroot
def readKickstart(self, kspath, opts):
"""
Read a kickstart file and save the ks object as a task member.
@args:
kspath: path to a kickstart file
opts: task options dict (may carry 'ksversion')
@returns: None
"""
# XXX: If the ks file came from a local path and has %include
# macros, *-creator will fail because the included
# kickstarts were not copied into the chroot. For now we
# require users to flatten their kickstart file if submitting
# the task with a local path.
#
# Note that if an SCM URL was used instead, %include macros
# may not be a problem if the included kickstarts are present
# in the repository we checked out.
if opts.get('ksversion'):
version = ksparser.version.makeVersion(opts['ksversion'])
else:
version = ksparser.version.makeVersion()
self.ks = ksparser.KickstartParser(version)
try:
self.ks.readKickstart(kspath)
except IOError as e:
raise koji.LiveCDError("Failed to read kickstart file "
"'%s' : %s" % (kspath, e))
except kserrors.KickstartError as e:
raise koji.LiveCDError("Failed to parse kickstart file "
"'%s' : %s" % (kspath, e))
def prepareKickstart(self, repo_info, target_info, arch, broot, opts):
"""
Process the ks file to be used for controlled image generation. This
method also uploads the modified kickstart file to the task output
area.
@args:
repo_info: a session.getRepo() object
target_info: a session.getBuildTarget() object
arch: canonical architecture name
broot: a buildroot object
opts: task options dict
@returns:
absolute path to a processed kickstart file within the buildroot
"""
# Now we do some kickstart manipulation. If the user passed in a repo
# url with --repo, then we substitute that in for the repo(s) specified
# in the kickstart file. If --repo wasn't specified, then we use the
# repo associated with the target passed in initially.
repo_class = kscontrol.dataMap[self.ks.version]['RepoData']
if not opts.get('ksrepo'):
self.ks.handler.repo.repoList = [] # delete whatever the ks file told us
if opts.get('repo'):
user_repos = opts['repo']
if isinstance(user_repos, six.string_types):
user_repos = user_repos.split(',')
index = 0
for user_repo in set(user_repos):
self.ks.handler.repo.repoList.append(repo_class(
baseurl=user_repo, name='koji-override-%i' % index))
index += 1
else:
path_info = koji.PathInfo(topdir=self.options.topurl)
repopath = path_info.repo(repo_info['id'],
target_info['build_tag_name'])
baseurl = '%s/%s' % (repopath, arch)
self.logger.debug('BASEURL: %s' % baseurl)
self.ks.handler.repo.repoList.append(repo_class(
baseurl=baseurl, name='koji-%s-%i' % (target_info['build_tag_name'],
repo_info['id'])))
# inject url if provided
if opts.get('install_tree_url'):
self.ks.handler.url(url=opts['install_tree_url'])
# Write out the new ks file. Note that things may not be in the same
# order and comments in the original ks file may be lost.
kskoji = os.path.join(broot.tmpdir(), 'koji-image-%s-%i.ks' %
(target_info['build_tag_name'], self.id))
koji.ensuredir(broot.tmpdir())
with koji._open_text_file(kskoji, 'wt') as outfile:
outfile.write(str(self.ks.handler))
# put the new ksfile in the output directory
if not os.path.exists(kskoji):
raise koji.LiveCDError("KS file missing: %s" % kskoji)
self.uploadFile(kskoji)
return broot.path_without_to_within(kskoji) # absolute path within chroot
def getBootloaderAppend(self):
"""
Return `bootloader --append`
This is passed to livemedia `--extra-boot-args`
"""
try:
return self.ks.handler.bootloader.appendLine
except AttributeError:
return
def getImagePackages(self, cachepath):
"""
Read RPM header information from the yum cache available in the
given path. Returns a list of dictionaries for each RPM included.
"""
found = False
hdrlist = []
fields = ['name', 'version', 'release', 'epoch', 'arch',
'buildtime', 'sigmd5']
for root, dirs, files in os.walk(cachepath):
for f in files:
if fnmatch(f, '*.rpm'):
pkgfile = os.path.join(root, f)
hdr = koji.get_header_fields(pkgfile, fields)
hdr['size'] = os.path.getsize(pkgfile)
hdr['payloadhash'] = koji.hex_string(hdr['sigmd5'])
del hdr['sigmd5']
hdrlist.append(hdr)
found = True
if not found:
raise koji.LiveCDError('No repos found in yum cache!')
return hdrlist
def _shortenVolID(self, name, version, release):
# Duplicated with pungi-fedora fedora.conf
# see https://pagure.io/koji/pull-request/817
substitutions = {
'Beta': 'B',
'Rawhide': 'rawh',
'Astronomy_KDE': 'AstK',
'Atomic': 'AH',
'Cinnamon': 'Cinn',
'Cloud': 'C',
'Design_suite': 'Dsgn',
'Electronic_Lab': 'Elec',
'Everything': 'E',
'Games': 'Game',
'Images': 'img',
'Jam_KDE': 'Jam',
'MATE_Compiz': 'MATE',
# Note https://pagure.io/pungi-fedora/issue/533
'Python-Classroom': 'Clss',
'Python_Classroom': 'Clss',
'Robotics': 'Robo',
'Scientific_KDE': 'SciK',
'Security': 'Sec',
'Server': 'S',
'Workstation': 'WS',
'WorkstationOstree': 'WS',
}
# Duplicated with pungi/util.py _apply_substitutions
for k, v in sorted(to_list(substitutions.items()), key=lambda x: len(x[0]), reverse=True):
if k in name:
name = name.replace(k, v)
if k in version:
version = version.replace(k, v)
if k in release:
release = release.replace(k, v)
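# e.g. name 'Fedora-Workstation', version '40', release 'Beta-1.2'
# shortens to 'Fedora-WS-40-B-1.2'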
volid = "%s-%s-%s" % (name, version, release)
# Difference: pungi treats result more than 32 characters long as
# fatal and raises an error
return volid[:32]
# ApplianceTask begins with a mock chroot, and then installs appliance-tools
# into it via the appliance-build group. appliance-creator is then executed
# in the chroot to create the appliance image.
#
class ApplianceTask(ImageTask):
Methods = ['createAppliance']
_taskWeight = 1.5
def getRootDevice(self):
"""
Return the device name for the / partition, as specified in the
kickstart file. Appliances should have this defined.
"""
for part in self.ks.handler.partition.partitions:
if part.mountpoint == '/':
return part.disk
elif part.fstype == 'btrfs':
for s in self.ks.handler.btrfs.btrfsList:
if s.subvol and s.mountpoint == '/':
return part.disk
raise koji.ApplianceError('kickstart lacks a "/" mountpoint')
def handler(self, name, version, release, arch, target_info,
build_tag, repo_info, ksfile, opts=None):
if opts is None:
opts = {}
self.opts = opts
broot = self.makeImgBuildRoot(build_tag, repo_info, arch,
'appliance-build')
kspath = self.fetchKickstart(broot, ksfile, target_info['build_tag_name'])
self.readKickstart(kspath, opts)
kskoji = self.prepareKickstart(repo_info, target_info, arch, broot, opts)
# Figure out appliance-creator arguments, let it fail if something
# is wrong.
odir = 'app-output'
opath = os.path.join(broot.tmpdir(), odir)
# arbitrary paths in chroot
cachedir = broot.tmpdir(within=True) + '/koji-appliance'
app_log = broot.tmpdir(within=True) + '/appliance.log'
os.mkdir(opath)
cmd = ['/usr/bin/appliance-creator', '-c', kskoji, '-d', '-v',
'--logfile', app_log, '--cache', cachedir, '-o', odir]
for arg_name in ('vmem', 'vcpu', 'format'):
arg = opts.get(arg_name)
if arg is not None:
cmd.extend(['--%s' % arg_name, arg])
appname = '%s-%s-%s' % (name, version, release)
cmd.extend(['--name', appname])
cmd.extend(['--version', version, '--release', release])
# Run appliance-creator
rv = broot.mock(['--cwd', broot.tmpdir(within=True), '--chroot', '--'] + cmd)
self.uploadFile(os.path.join(broot.rootdir(), app_log[1:]))
if rv:
raise koji.ApplianceError(
"Could not create appliance: %s" % parseStatus(rv, 'appliance-creator') +
"; see root.log or appliance.log for more information")
# Find the results
results = []
for directory, subdirs, files in os.walk(opath):
for f in files:
results.append(os.path.join(broot.tmpdir(),
directory, f))
self.logger.debug('output: %s' % results)
if len(results) == 0:
raise koji.ApplianceError("Could not find image build results!")
logs = ['appliance.log', os.path.basename(ksfile), os.path.basename(kskoji)]
logs.extend(broot.logs)
imgdata = {
'arch': arch,
'rootdev': self.getRootDevice(),
'task_id': self.id,
'logs': logs,
'name': name,
'version': version,
'release': release
}
imgdata['files'] = []
for ofile in results:
self.uploadFile(ofile)
imgdata['files'].append(os.path.basename(ofile))
# TODO: get file manifest from the appliance
if not opts.get('scratch'):
hdrlist = self.getImagePackages(os.path.join(broot.rootdir(),
cachedir[1:]))
broot.markExternalRPMs(hdrlist)
imgdata['rpmlist'] = hdrlist
broot.expire()
return imgdata
# LiveCDTask begins with a mock chroot, and then installs livecd-tools into it
# via the livecd-build group. livecd-creator is then executed in the chroot
# to create the LiveCD image.
#
class LiveCDTask(ImageTask):
Methods = ['createLiveCD']
_taskWeight = 1.5
def genISOManifest(self, image, manifile):
"""
Using iso9660 from pycdio, get the file manifest of the given image,
and save it to the text file manifile.
"""
fd = koji._open_text_file(manifile, 'wt')
if not fd:
raise koji.GenericError(
'Unable to open manifest file (%s) for writing!' % manifile)
iso = iso9660.ISO9660.IFS(source=image)
if not iso.is_open():
raise koji.GenericError(
'Could not open %s as an ISO-9660 image!' % image)
# image metadata
id = iso.get_application_id()
if id is not None:
fd.write("Application ID: %s\n" % id)
id = iso.get_preparer_id()
if id is not None:
fd.write("Preparer ID: %s\n" % id)
id = iso.get_publisher_id()
if id is not None:
fd.write("Publisher ID: %s\n" % id)
id = iso.get_system_id()
if id is not None:
fd.write("System ID: %s\n" % id)
id = iso.get_volume_id()
if id is not None:
fd.write("Volume ID: %s\n" % id)
id = iso.get_volumeset_id()
if id is not None:
fd.write("Volumeset ID: %s\n" % id)
fd.write('\nSize(bytes) File Name\n')
manifest = self.listISODir(iso, '/')
for a_file in manifest:
fd.write(a_file)
fd.close()
iso.close()
def listISODir(self, iso, path):
"""
Helper function called recursively by genISOManifest. Returns a
listing of files/directories at the given path in an iso image obj.
"""
manifest = []
file_stats = iso.readdir(path)
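# per the indexing below, each pycdio stat tuple carries the file name
# at [0], its size in bytes at [2], and a type flag at [4] (2 == directory)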
for stat in file_stats:
filename = stat[0]
size = stat[2]
is_dir = stat[4] == 2
if filename == '..':
continue
elif filename == '.':
# path should always end in a trailing /
filepath = path
else:
filepath = path + filename
# identify directories with a trailing /
if is_dir:
filepath += '/'
if is_dir and filename != '.':
# recurse into subdirectories
manifest.extend(self.listISODir(iso, filepath))
else:
# output information for the current directory and files
manifest.append("%-10d %s\n" % (size, filepath))
return manifest
def handler(self, name, version, release, arch, target_info,
build_tag, repo_info, ksfile, opts=None):
if opts is None:
opts = {}
self.opts = opts
broot = self.makeImgBuildRoot(build_tag, repo_info, arch,
'livecd-build')
kspath = self.fetchKickstart(broot, ksfile, target_info['build_tag_name'])
self.readKickstart(kspath, opts)
kskoji = self.prepareKickstart(repo_info, target_info, arch, broot, opts)
# arbitrary paths in chroot
cachedir = broot.tmpdir(within=True) + '/koji-livecd'
livecd_log = broot.tmpdir(within=True) + '/livecd.log'
cmd = ['/usr/bin/livecd-creator', '-c', kskoji, '-d', '-v',
'--logfile', livecd_log, '--cache', cachedir]
isoname = '%s-%s-%s' % (name, version, release)
volid = opts.get('volid')
if not volid:
volid = self._shortenVolID(name, version, release)
if len(volid) > 32:
raise koji.LiveCDError('volume ID is longer than 32 characters')
cmd.extend(['-f', volid])
# Run livecd-creator
rv = broot.mock(['--cwd', broot.tmpdir(within=True), '--chroot', '--'] + cmd)
self.uploadFile(os.path.join(broot.rootdir(), livecd_log[1:]))
if rv:
raise koji.LiveCDError(
'Could not create LiveCD: %s' % parseStatus(rv, 'livecd-creator') +
'; see root.log or livecd.log for more information')
# Find the resultant iso
# The cwd of the livecd-creator process is tmpdir() in the chroot, so
# that is where it writes the .iso
files = os.listdir(broot.tmpdir())
isofile = None
for afile in files:
if afile.endswith('.iso'):
if not isofile:
isofile = afile
else:
raise koji.LiveCDError(
'multiple .iso files found: %s and %s' % (isofile, afile))
if not isofile:
raise koji.LiveCDError('could not find iso file in chroot')
isosrc = os.path.join(broot.tmpdir(), isofile)
# copy the iso out of the chroot. If we were given an isoname,
# this is where the renaming happens.
self.logger.debug('uploading image: %s' % isosrc)
isoname += '.iso'
# Generate the file manifest of the image, upload the results
manifest = os.path.join(broot.resultdir(), 'manifest.log')
self.genISOManifest(isosrc, manifest)
self.uploadFile(manifest)
self.uploadFile(isosrc, remoteName=isoname)
logs = ['livecd.log', os.path.basename(ksfile), os.path.basename(kskoji)]
logs.extend(broot.logs)
imgdata = {'arch': arch,
'files': [isoname],
'rootdev': None,
'task_id': self.id,
'logs': logs,
'name': name,
'version': version,
'release': release
}
if not opts.get('scratch'):
hdrlist = self.getImagePackages(os.path.join(broot.rootdir(),
cachedir[1:]))
imgdata['rpmlist'] = hdrlist
broot.markExternalRPMs(hdrlist)
broot.expire()
return imgdata
# livemedia-creator
class LiveMediaTask(ImageTask):
Methods = ['createLiveMedia']
_taskWeight = 1.5
# For livemedia-creator we do not want to bind mount /dev, see
# https://bugzilla.redhat.com/show_bug.cgi?id=1315541
bind_opts = {}
def fetch_lorax_templates_from_scm(self, build_root):
"""
Checkout the lorax templates from SCM for use by livemedia-creator.
This will make a checkout of the lorax templates from SCM so that they
may be passed to livemedia-creator. Here we are operating outside the
chroot of the BuildRoot. The following options are essential:
- lorax_url points to the SCM containing the templates.
- lorax_dir provides a relative reference to the templates within
the checkout.
:param build_root:
The BuildRoot instance to receive the checkout.
:return:
An absolute path (from within the chroot) to where livemedia-creator
can find the checked out templates.
"""
scm = SCM(self.opts['lorax_url'], allow_password=self.options.allow_password_in_scm_url)
scm.assert_allowed(allowed=self.options.allowed_scms,
session=self.session,
by_config=self.options.allowed_scms_use_config,
by_policy=self.options.allowed_scms_use_policy,
policy_data={
'user_id': self.taskinfo['owner'],
'channel': self.session.getChannel(self.taskinfo['channel_id'],
strict=True)['name'],
'scratch': self.opts.get('scratch')
})
logfile = os.path.join(self.workdir, 'lorax-templates-checkout.log')
checkout_dir = scm.checkout(build_root.tmpdir(),
self.session, self.getUploadDir(), logfile)
return os.path.join(build_root.path_without_to_within(checkout_dir),
self.opts['lorax_dir'])
def genISOManifest(self, image, manifile):
"""
Using iso9660 from pycdio, get the file manifest of the given image,
and save it to the text file manifile.
"""
fd = koji._open_text_file(manifile, 'wt')
if not fd:
raise koji.GenericError(
'Unable to open manifest file (%s) for writing!' % manifile)
iso = iso9660.ISO9660.IFS(source=image)
if not iso.is_open():
raise koji.GenericError(
'Could not open %s as an ISO-9660 image!' % image)
# image metadata
id = iso.get_application_id()
if id is not None:
fd.write("Application ID: %s\n" % id)
id = iso.get_preparer_id()
if id is not None:
fd.write("Preparer ID: %s\n" % id)
id = iso.get_publisher_id()
if id is not None:
fd.write("Publisher ID: %s\n" % id)
id = iso.get_system_id()
if id is not None:
fd.write("System ID: %s\n" % id)
id = iso.get_volume_id()
if id is not None:
fd.write("Volume ID: %s\n" % id)
id = iso.get_volumeset_id()
if id is not None:
fd.write("Volumeset ID: %s\n" % id)
fd.write('\nSize(bytes) File Name\n')
manifest = self.listISODir(iso, '/')
for a_file in manifest:
fd.write(a_file)
fd.close()
iso.close()
def listISODir(self, iso, path):
"""
Helper function called recursively by genISOManifest. Returns a
listing of files/directories at the given path in an iso image obj.
"""
manifest = []
file_stats = iso.readdir(path)
for stat in file_stats:
filename = stat[0]
size = stat[2]
is_dir = stat[4] == 2
if filename == '..':
continue
elif filename == '.':
# path should always end in a trailing /
filepath = path
else:
filepath = path + filename
# identify directories with a trailing /
if is_dir:
filepath += '/'
if is_dir and filename != '.':
# recurse into subdirectories
manifest.extend(self.listISODir(iso, filepath))
else:
# output information for the current directory and files
manifest.append("%-10d %s\n" % (size, filepath))
return manifest
def handler(self, name, version, release, arch, target_info,
build_tag, repo_info, ksfile, opts=None):
if opts is None:
opts = {}
self.opts = opts
broot = self.makeImgBuildRoot(build_tag, repo_info, arch,
'livemedia-build')
kspath = self.fetchKickstart(broot, ksfile, target_info['build_tag_name'])
self.readKickstart(kspath, opts)
kskoji = self.prepareKickstart(repo_info, target_info, arch, broot, opts)
b_append = self.getBootloaderAppend()
# arbitrary paths in chroot
livemedia_log = broot.tmpdir(within=True) + '/lmc-logs/livemedia-out.log'
resultdir = broot.tmpdir(within=True) + '/lmc'
# Common LMC command setup, needs extending
cmd = ['/sbin/livemedia-creator',
'--ks', kskoji,
'--logfile', livemedia_log,
'--no-virt',
'--resultdir', resultdir,
'--project', name,
# '--tmp', '/tmp'
]
volid = opts.get('volid')
if not volid:
volid = self._shortenVolID(name, version, release)
if len(volid) > 32:
raise koji.LiveMediaError('volume ID is longer than 32 characters')
# note: at the moment, we are only generating live isos. We may add support
# for other types in the future
cmd.extend(['--make-iso',
'--volid', volid,
'--iso-only',
])
isoname = '%s-%s-%s-%s.iso' % (name, arch, version, release)
cmd.extend(['--iso-name', isoname,
'--releasever', version,
])
if arch == 'x86_64' and not opts.get('nomacboot'):
cmd.append('--macboot')
else:
cmd.append('--nomacboot')
if b_append:
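# wrap the append line in literal quotes so it reaches livemedia-creator
# as a single --extra-boot-args value when mock runs the command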
cmd.extend(['--extra-boot-args', '\"%s\"' % b_append])
if 'lorax_url' in self.opts:
templates_dir = self.fetch_lorax_templates_from_scm(broot)
cmd.extend(['--lorax-templates', templates_dir])
if self.opts.get('squashfs_only'):
cmd.append('--squashfs-only')
if isinstance(self.opts.get('compress_arg'), (list, tuple)):
for com_arg in self.opts['compress_arg']:
cmd.extend(['--compress-arg', com_arg])
# Run livemedia-creator
rv = broot.mock(['--cwd', broot.tmpdir(within=True), '--chroot', '--'] + cmd)
# upload logs
logdirs = [
os.path.join(broot.tmpdir(), 'lmc-logs'),
os.path.join(broot.tmpdir(), 'lmc-logs/anaconda'),
]
for logdir in logdirs:
if not os.path.isdir(logdir):
continue
for filename in os.listdir(logdir):
if not filename.endswith('.log'):
continue
filepath = os.path.join(logdir, filename)
if os.stat(filepath).st_size == 0:
continue
# avoid file duplication between directories by prefixing anaconda logs
if logdir.endswith('anaconda'):
self.uploadFile(filepath, remoteName='anaconda-%s' % filename)
continue
self.uploadFile(filepath)
if rv:
raise koji.LiveMediaError(
'Could not create LiveMedia: %s' % parseStatus(rv, 'livemedia-creator') +
'; see root.log or livemedia-out.log for more information')
# Find the resultant iso
# The cwd of the livemedia-creator process is broot.tmpdir() in the chroot, so
# that is where it writes the .iso
rootresultsdir = os.path.join(broot.rootdir(), resultdir.lstrip('/'))
files = os.listdir(rootresultsdir)
isofile = None
for afile in files:
if afile.endswith('.iso'):
if not isofile:
isofile = afile
else:
raise koji.LiveMediaError(
'multiple .iso files found: %s and %s' % (isofile, afile))
if not isofile:
raise koji.LiveMediaError('could not find iso file in chroot')
isosrc = os.path.join(rootresultsdir, isofile)
# Generate the file manifest of the image, upload the results
manifest = os.path.join(broot.resultdir(), 'manifest.log')
self.genISOManifest(isosrc, manifest)
self.uploadFile(manifest)
self.logger.debug('uploading image: %s' % isosrc)
self.uploadFile(isosrc, remoteName=isoname)
logs = ['livemedia-out.log', os.path.basename(ksfile), os.path.basename(kskoji)]
logs.extend(broot.logs)
imgdata = {'arch': arch,
'files': [isoname],
'rootdev': None,
'task_id': self.id,
'logs': logs,
'name': name,
'version': version,
'release': release
}
if not opts.get('scratch'):
# TODO - generate list of rpms in image
# (getImagePackages doesn't work here)
# hdrlist = self.getImagePackages(os.path.join(broot.rootdir(),
# cachedir[1:]))
imgdata['rpmlist'] = []
# broot.markExternalRPMs(hdrlist)
broot.expire()
return imgdata
# A generic task for building disk images using Oz
# Other Oz-based image handlers should inherit this.
class OzImageTask(BaseTaskHandler):
Methods = []
supported_formats = {}
def fetchKickstart(self, build_tag):
"""
Retrieve the kickstart file we were given (locally or remotely) and
upload it to the hub.
Note that if the KS file existed locally, then "ksfile" is a relative
path to it in the /mnt/koji/work directory. If not, then it is still
the parameter the user passed in initially, and we assume it is a
relative path in a remote scm. The user should have passed in an scm
url with --ksurl.
@args: build_tag: build tag name
use self.opts for options
@returns:
absolute path to the retrieved kickstart file
"""
ksfile = self.opts.get('kickstart')
self.logger.debug("ksfile = %s" % ksfile)
if self.opts.get('ksurl'):
scm = SCM(self.opts['ksurl'], allow_password=self.options.allow_password_in_scm_url)
scm.assert_allowed(allowed=self.options.allowed_scms,
session=self.session,
by_config=self.options.allowed_scms_use_config,
by_policy=self.options.allowed_scms_use_policy,
policy_data={
'user_id': self.taskinfo['owner'],
'channel': self.session.getChannel(self.taskinfo['channel_id'],
strict=True)['name'],
'scratch': self.opts.get('scratch')
})
logfile = os.path.join(self.workdir, 'checkout-%s.log' % self.arch)
self.run_callbacks('preSCMCheckout', scminfo=scm.get_info(),
build_tag=build_tag, scratch=self.opts.get('scratch'))
scmsrcdir = scm.checkout(self.workdir, self.session,
self.getUploadDir(), logfile)
self.run_callbacks("postSCMCheckout",
scminfo=scm.get_info(),
build_tag=build_tag,
scratch=self.opts.get('scratch'),
srcdir=scmsrcdir)
kspath = os.path.join(scmsrcdir, os.path.basename(ksfile))
else:
tops = dict([(k, getattr(self.options, k)) for k in ('topurl', 'topdir')])
tops['tempdir'] = self.workdir
with koji.openRemoteFile(ksfile, **tops) as ks_src:
kspath = os.path.join(self.workdir, os.path.basename(ksfile))
with open(kspath, 'wb') as ks_dest:
ks_dest.write(ks_src.read())
self.logger.debug('uploading kickstart from here: %s' % kspath)
self.uploadFile(kspath) # upload the original ks file
return kspath # absolute path to the ks file
def readKickstart(self, kspath):
"""
Read a kickstart file and save the ks object as a task member.
@args:
kspath: path to a kickstart file
@returns:
a kickstart object returned by pykickstart
"""
# XXX: If the ks file came from a local path and has %include
# macros, Oz will fail because it can only handle flat files.
# We require users to flatten their kickstart file.
if self.opts.get('ksversion'):
version = ksparser.version.makeVersion(self.opts['ksversion'])
else:
version = ksparser.version.makeVersion()
ks = ksparser.KickstartParser(version)
self.logger.debug('attempting to read kickstart: %s' % kspath)
try:
ks.readKickstart(kspath)
except IOError as e:
raise koji.BuildError("Failed to read kickstart file "
"'%s' : %s" % (kspath, e))
except kserrors.KickstartError as e:
raise koji.BuildError("Failed to parse kickstart file "
"'%s' : %s" % (kspath, e))
return ks
def prepareKickstart(self, kspath, install_tree):
"""
Process the ks file to be used for controlled image generation. This
method also uploads the modified kickstart file to the task output
area on the hub.
@args:
kspath: a path to a kickstart file
install_tree: the install tree URL to inject into the kickstart
@returns:
a kickstart object with koji-specific modifications
"""
ks = self.readKickstart(kspath)
# Now we do some kickstart manipulation. If the user passed in a repo
# url with --repo, then we substitute that in for the repo(s) specified
# in the kickstart file. If --repo wasn't specified, then we use the
# repo associated with the target passed in initially.
ks.handler.repo.repoList = [] # delete whatever the ks file told us
repo_class = kscontrol.dataMap[ks.version]['RepoData']
# only use noverifyssl if allowed in kojid.conf
if self.opts.get('noverifyssl') and not self.options.allow_noverifyssl:
raise koji.BuildError("noverifyssl option is not enabled")
noverifyssl = self.options.allow_noverifyssl and self.opts.get('noverifyssl')
# TODO: sensibly use "url" and "repo" commands in kickstart
if self.opts.get('repo'):
# the user used --repo at least once
user_repos = self.opts.get('repo')
index = 0
for user_repo in set(user_repos):
repo_url = user_repo.replace('$arch', self.arch)
ks.handler.repo.repoList.append(repo_class(
baseurl=repo_url, name='koji-override-%i' % index,
noverifyssl=noverifyssl))
index += 1
else:
# --repo was not given, so we use the target's build repo
path_info = koji.PathInfo(topdir=self.options.topurl)
repopath = path_info.repo(self.repo_info['id'],
self.target_info['build_tag_name'])
baseurl = '%s/%s' % (repopath, self.arch)
self.logger.debug('BASEURL: %s' % baseurl)
ks.handler.repo.repoList.append(repo_class(
baseurl=baseurl, name='koji-override-0',
noverifyssl=noverifyssl))
# inject the URL of the install tree into the kickstart
ks.handler.url(url=install_tree, noverifyssl=noverifyssl)
return ks
def writeKickstart(self, ksobj, ksname):
"""
Write out the new ks file. Note that things may not be in the same
order and comments in the original ks file may be lost.
@args:
ksobj: a pykickstart object of what we want to write
ksname: file name for the kickstart
@returns:
an absolute path to the kickstart file we wrote
"""
kspath = os.path.join(self.workdir, ksname)
with koji._open_text_file(kspath, 'wt') as outfile:
outfile.write(str(ksobj.handler))
# put the new ksfile in the output directory
if not os.path.exists(kspath):
raise koji.BuildError("KS file missing: %s" % kspath)
self.uploadFile(kspath) # upload the modified ks file
return kspath
def makeConfig(self):
"""
Generate a configuration dict for ImageFactory. This will override
anything in the /etc config files. We do this forcibly so that it is
impossible for Koji to use any image caches or leftover metadata from
other images created by the service.
@args: none
@returns:
a dictionary used for configuring ImageFactory to build an image
the way we want
"""
return {
# Oz specific
'oz_data_dir': os.path.join(self.workdir, 'oz_data'),
'oz_screenshot_dir': os.path.join(self.workdir, 'oz_screenshots'),
# IF specific
'imgdir': os.path.join(self.workdir, 'scratch_images'),
'tmpdir': os.path.join(self.workdir, 'oz-tmp'),
'verbose': True,
'timeout': self.options.oz_install_timeout or None,
'output': 'log',
'raw': False,
'debug': True,
'image_manager': 'file',
'plugins': '/etc/imagefactory/plugins.d',
'rhevm_image_format': 'qcow2',
'tdl_require_root_pw': False,
'image_manager_args': {
'storage_path': os.path.join(self.workdir, 'output_image')},
}
def makeTemplate(self, name, inst_tree):
"""
Generate a simple "TDL" for ImageFactory to build an image with.
@args:
name: a name for the image
inst_tree: a string, a URL to the install tree (a compose)
@returns:
An XML string that imagefactory can consume
"""
# we have to split this up so the variable substitution works
# XXX: using a packages section (which we don't) will have IF boot the
# image and attempt to ssh in. This breaks docker image creation.
# TODO: intelligently guess the distro based on the install tree URL
distname, distver = self.parseDistro(self.opts.get('distro'))
if self.arch in ['armhfp', 'armv7hnl', 'armv7hl']:
arch = 'armv7l'
else:
arch = self.arch
template = """<template>
<name>%s</name>
<os>
<name>%s</name>
<version>%s</version>
<arch>%s</arch>
<install type='url'>
<url>%s</url>
</install>
""" % (name, distname, distver, arch, inst_tree)
template += ("<icicle>\n"
" <extra_command>rpm -qa --qf"
" '%{NAME},%{VERSION},%{RELEASE},%{ARCH},%{EPOCH},%{SIZE},%{SIGMD5},"
"%{BUILDTIME}\\n'</extra_command>\n"
" </icicle>\n"
" ")
# TODO: intelligently guess the size based on the kickstart file
template += """</os>
<description>%s OS</description>
<disk>
<size>%s</size>
</disk>
</template>
""" % (name, self.opts.get('disk_size')) # noqa: E501
return template
def parseDistro(self, distro):
"""
Figure out the distribution name and version we are going to build an
image on.
@args:
distro: a string of the form RHEL-X.Y, Fedora-NN, CentOS-X.Y, or SL-X.Y
@returns:
a 2-element sequence of (name, version); where the split happens
depends on the distro
"""
if distro.startswith('RHEL'):
major, minor = distro.split('.')
if major == 'RHEL-5':
minor = 'U' + minor
return major, minor
elif distro.startswith('Fedora'):
return distro.split('-')
elif distro.startswith('CentOS'):
return distro.split('.')
elif distro.startswith('SL'):
return distro.split('.')
else:
raise koji.BuildError('Unknown or unsupported distro given: %s' % distro)
def fixImageXML(self, format, filename, xmltext):
"""
The XML generated by Oz/ImageFactory knows nothing about the name
or image format conversions Koji does. We fix those values in the
libvirt XML and write the changes out to a file, the path of which
we return.
@args:
format = raw, qcow2, vmdk, etc... a string representation
filename = the name of the XML file we will save this to
xmltext = the libvirt XML to start with
@return:
an absolute path to the modified XML
"""
newxml = xml.dom.minidom.parseString(xmltext) # nosec
ename = newxml.getElementsByTagName('name')[0]
ename.firstChild.nodeValue = self.imgname
esources = newxml.getElementsByTagName('source')
for e in esources:
if e.hasAttribute('file'):
e.setAttribute('file', '%s.%s' % (self.imgname, format))
edriver = newxml.getElementsByTagName('driver')[0]
edriver.setAttribute('type', format)
if not self.supported_formats.get(format, {}).get('qemu'):
edriver.setAttribute('type-warning',
"%s is not qemu-supported format, "
"you need to convert image before use "
"and update driver+source accordingly." % format)
xml_path = os.path.join(self.workdir, filename)
with koji._open_text_file(xml_path, 'wt') as xmlfd:
xmlfd.write(newxml.toprettyxml())
return xml_path
def getScreenshot(self):
"""
Locate a screenshot taken by libvirt in the case of build failure,
if it exists. If it does, return the path, else return None.
@args: none
@returns: a path to a screenshot taken by libvirt, or None if none was found
"""
shotdir = os.path.join(self.workdir, 'oz_screenshots')
screenshot = None
found = glob.glob(os.path.join(shotdir, '*.ppm'))
if len(found) > 0:
screenshot = found[0]
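# a .png screenshot, if also present, takes precedence over the .ppm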
found = glob.glob(os.path.join(shotdir, '*.png'))
if len(found) > 0:
screenshot = found[0]
return screenshot
class BaseImageTask(OzImageTask):
Methods = ['createImage']
_taskWeight = 2.0
def __init__(self, *args, **kwargs):
super(BaseImageTask, self).__init__(*args, **kwargs)
'''
format: {
'qemu': bool - whether qemu-img natively supports this format
'fcall': the function that handles creation of this format
}
'''
self.supported_formats = {
'docker': {'qemu': False, 'fcall': self._buildDocker},
'liveimg-squashfs': {'qemu': False, 'fcall': self._buildSquashfs},
'qcow': {'qemu': True, 'fcall': self._buildConvert},
'qcow2': {'qemu': True, 'fcall': self._buildConvert},
'raw': {'qemu': True, 'fcall': self._buildBase},
'raw-xz': {'qemu': False, 'fcall': self._buildXZ},
'rhevm-ova': {'qemu': False, 'fcall': self._buildOVA},
'tar-gz': {'qemu': False, 'fcall': self._buildTarGZ},
'vagrant-hyperv': {'qemu': False, 'fcall': self._buildOVA},
'vagrant-libvirt': {'qemu': False, 'fcall': self._buildOVA},
'vagrant-virtualbox': {'qemu': False, 'fcall': self._buildOVA},
'vagrant-vmware-fusion': {'qemu': False, 'fcall': self._buildOVA},
'vdi': {'qemu': True, 'fcall': self._buildConvert},
'vmdk': {'qemu': True, 'fcall': self._buildConvert},
'vpc': {'qemu': True, 'fcall': self._buildConvert},
'vsphere-ova': {'qemu': False, 'fcall': self._buildOVA},
}
def _format_deps(self, formats):
"""
Return a dictionary where the keys are the image formats we need to
build/convert, and the values are booleans that indicate whether the
output should be included in the task results.
Some image formats require others to be processed first, which is why
we have to do this. raw files in particular may not be kept.
"""
# opts.get('format') may be None; treat that the same as an empty list
formats = formats or []
for f in formats:
if f not in self.supported_formats:
raise koji.ApplianceError('Invalid format: %s' % f)
f_dict = dict((f, True) for f in formats)
# If the user requested one or more image formats (with --format), we do
# not include the raw disk image in the results by default, because it is
# large (the full disk size, typically several GB). To override this, the
# user must explicitly request "--format raw". If --format was not given
# at all, then we deliver the raw disk image by itself.
if len(formats) == 0:
# we only want a raw disk image (no format option given)
f_dict['raw'] = True
elif 'raw' not in f_dict:
f_dict['raw'] = False
self.logger.debug('Image delivery plan: %s' % f_dict)
return f_dict
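# Illustrative behavior (format names come from self.supported_formats):
#   self._format_deps(['qcow2', 'vmdk']) -> {'qcow2': True, 'vmdk': True, 'raw': False}
#   self._format_deps(['raw', 'qcow2'])  -> {'raw': True, 'qcow2': True}
#   self._format_deps([])                -> {'raw': True}
#   self._format_deps(None)              -> {'raw': True}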
def do_images(self, ks, template, inst_tree):
"""
Call out to ImageFactory to build the image(s) we want. Returns a dict
of details for each image type we had to ask ImageFactory to build
"""
# add a handler to the logger so that we capture ImageFactory's logging
self.fhandler = logging.FileHandler(self.ozlog)
self.bd = BuildDispatcher()
self.tlog = logging.getLogger()
self.tlog.setLevel(logging.DEBUG)
self.tlog.addHandler(self.fhandler)
images = {}
random.seed() # necessary to ensure a unique mac address
params = {'install_script': str(ks.handler),
'offline_icicle': True}
# build the base (raw) image
self.base_img = self._buildBase(template, params)
images['raw'] = {'image': self.base_img.base_image.data,
'icicle': self.base_img.base_image.icicle}
# Do the rest of the image types (everything but raw)
for format in self.formats:
if format == 'raw':
continue
self.logger.info('dispatching %s image builder' % format)
images[format] = self.supported_formats[format]['fcall'](format)
imginfo = self._processXML(images)
self.tlog.removeHandler(self.fhandler)
self.uploadFile(self.ozlog)
return imginfo
def _processXML(self, images):
"""
Produce XML that libvirt can import to create a domain based on the
image(s) we produced, and record the location of each XML file in the
corresponding image's dictionary.
@args:
images - a dict where the keys are image formats, and the values
are dicts with details about the image (location, icicle, etc)
@returns:
a dictionary just like "images" but with a new key called "libvirt"
that points to the path of the XML file for that image
"""
imginfo = {}
for fmt in images:
imginfo[fmt] = images[fmt]
lxml = self.fixImageXML(fmt, 'libvirt-%s-%s.xml' % (fmt, self.arch),
self.base_img.base_image.parameters['libvirt_xml'])
imginfo[fmt]['libvirt'] = lxml
return imginfo
def _checkImageState(self, image):
"""
Query ImageFactory for details of a dispatched image build. If it is
FAILED we raise an exception.
@args:
image - a Builder object returned by the BuildDispatcher
@returns: nothing
"""
if image.target_image:
status = image.target_image.status
details = image.target_image.status_detail['error']
else:
status = image.base_image.status
details = image.base_image.status_detail['error']
self.logger.debug('check image results: %s' % status)
if status == 'FAILED':
scrnshot = self.getScreenshot()
if scrnshot:
ext = scrnshot[-3:]
self.uploadFile(scrnshot, remoteName='screenshot.%s' % ext)
if image.os_plugin:
image.os_plugin.abort() # forcibly tear down the VM
# TODO abort when a task is CANCELLED
if not self.session.checkUpload('', os.path.basename(self.ozlog)):
self.tlog.removeHandler(self.fhandler)
self.uploadFile(self.ozlog)
if 'No disk activity' in details:
details = 'Automated install failed or prompted for input. ' \
'See the screenshot in the task results for more information'
raise koji.ApplianceError('Image status is %s: %s' %
(status, details))
def _mergeFactoryParams(self, img_opts, fixed_params):
"""
Merge any KV pairs passed in via --factory-parameter CLI options
into an existing dictionary that will eventually be passed to
factory build commands. This allows a fairly generic mechanism
for parameter passthrough to plugins and the base builder that does
not require patching the builder and CLI each time.
@args:
img_opts - an existing dict with any pre-existing parameters
fixed_params - list of parameters that must not be overridden
@returns:
nothing - dict is modified in place
"""
if self.opts.get('factory_parameter'):
for kvp in self.opts.get('factory_parameter'):
if kvp[0] not in fixed_params:
img_opts[kvp[0]] = kvp[1]
def _buildBase(self, template, params, wait=True):
"""
Build a base image using ImageFactory. This is a "raw" image.
@args:
template - an XML string for the TDL
params - a dict that controls some ImageFactory settings
wait - call join() on the building thread if True
@returns:
a dict with some metadata about the image (includes an icicle)
"""
# TODO: test the failure case where IF itself throws an exception
# ungracefully (missing a plugin for example)
# may need to still upload ozlog and remove the log handler
self.logger.info('dispatching a baseimg builder')
self.logger.debug('template: %s' % template)
self.logger.debug('pre-merge params: %s' % params)
# We enforce various things related to the ks file - do not allow override
self._mergeFactoryParams(params, ['install_script'])
self.logger.debug('post-merge params: %s' % params)
base = self.bd.builder_for_base_image(template, parameters=params)
if wait:
base.base_thread.join()
self._checkImageState(base)
return base
def _buildXZ(self, format):
"""
Use xz to compress a raw disk image. This is very straightforward.
@args:
format - a string representing the image format, "raw-xz"
@returns:
a dict with some metadata about the image
"""
newimg = os.path.join(self.workdir, self.imgname + '.raw.xz')
rawimg = os.path.join(self.workdir, self.imgname + '.raw')
cmd = ['/bin/cp', self.base_img.base_image.data, rawimg]
conlog = os.path.join(self.workdir,
'xz-cp-%s-%s.log' % (format, self.arch))
log_output(self.session, cmd[0], cmd, conlog, self.getUploadDir(),
logerror=1)
cmd = ['/usr/bin/xz', self.options.xz_options, rawimg]
conlog = os.path.join(self.workdir,
'xz-%s-%s.log' % (format, self.arch))
log_output(self.session, cmd[0], cmd, conlog, self.getUploadDir(),
logerror=1)
return {'image': newimg}
def _buildTarGZ(self, format):
"""
Use tar and gzip to compress a raw disk image.
@args:
format - not used, we only handle tar-gz
@returns:
a dict with some metadata about the image
"""
orig = self.base_img.base_image.data
newimg = os.path.join(self.workdir, self.imgname + '.tar.gz')
# see also: https://cloud.google.com/compute/docs/creating-custom-image
# the image in the tarball must be named disk.raw
imgdir = os.path.dirname(orig)
rawimg = os.path.join(imgdir, 'disk.raw')
os.link(orig, rawimg)
# make the tarball
cmd = ['/bin/tar', '-Sczvf', newimg, 'disk.raw']
conlog = os.path.join(self.workdir, 'tar-gz-%s.log' % self.arch)
log_output(self.session, cmd[0], cmd, conlog, self.getUploadDir(),
logerror=1, cwd=imgdir)
# now that we've made the tarball, we don't need this hardlink
os.unlink(rawimg)
return {'image': newimg}
def _buildSquashfs(self, format):
"""
Use squashfs to wrap a raw disk image into a liveimg-compatible image.
This can be used by dracut for booting, or by anaconda to install.
@args:
format - a string representing the image format, "liveimg-squashfs"
@returns:
a dict with some metadata about the image
"""
newimg = os.path.join(self.workdir, self.imgname + '.squashfs')
fsimg = os.path.join(self.workdir, 'squashfs-root/LiveOS/rootfs.img')
os.makedirs(os.path.join(self.workdir, 'squashfs-root/LiveOS'))
cmd = ['/bin/dd', 'conv=sparse', 'bs=1M',
'skip=1', # FIXME Hack to strip the disklabel
'if=%s' % self.base_img.base_image.data,
'of=%s' % fsimg]
conlog = os.path.join(self.workdir,
'squashfs-dd-%s-%s.log' % (format, self.arch))
log_output(self.session, cmd[0], cmd, conlog, self.getUploadDir(),
logerror=1)
cmd = ['/usr/sbin/mksquashfs', os.path.join(self.workdir, 'squashfs-root'),
newimg, '-comp', 'xz', '-noappend']
conlog = os.path.join(self.workdir,
'squashfs-mksquashfs-%s-%s.log' % (format, self.arch))
log_output(self.session, cmd[0], cmd, conlog, self.getUploadDir(),
logerror=1)
return {'image': newimg}
def _buildOVA(self, format):
"""
Build an OVA target image. This is a format supported by RHEV and
vSphere
@args:
format - a string representing the image format, "rhevm-ova"
@returns:
a dict with some metadata about the image
"""
img_opts = {}
if self.opts.get('ova_option'):
img_opts = dict([o.split('=') for o in self.opts.get('ova_option')])
# As far as Image Factory is concerned, vagrant boxes are just another type of OVA
# We communicate the desire for vagrant-specific formatting by adding the *_ova_format
# options and turning the underlying format option back into one of the two target
# image types ('vsphere-ova' or 'rhevm-ova') that are used to generate the intermediate
# disk image
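# e.g. (illustrative) 'vagrant-libvirt' is built in two Factory steps:
#   base image -> 'rhevm' target image -> 'OVA' target image, with
#   img_opts['rhevm_ova_format'] = 'vagrant-libvirt' requesting the box layout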
fixed_params = []
if format == 'vagrant-virtualbox':
format = 'vsphere-ova'
img_opts['vsphere_ova_format'] = 'vagrant-virtualbox'
fixed_params = ['vsphere_ova_format']
if format == 'vagrant-libvirt':
format = 'rhevm-ova'
img_opts['rhevm_ova_format'] = 'vagrant-libvirt'
fixed_params = ['rhevm_ova_format']
if format == 'vagrant-vmware-fusion':
format = 'vsphere-ova'
img_opts['vsphere_ova_format'] = 'vagrant-vmware-fusion'
# The initial disk image transform for VMWare Fusion/Workstation requires a "standard"
# VMDK, not the stream oriented format used for VirtualBox or regular VMWare OVAs
img_opts['vsphere_vmdk_format'] = 'standard'
fixed_params = ['vsphere_ova_format', 'vsphere_vmdk_format']
if format == 'vagrant-hyperv':
format = 'hyperv-ova'
img_opts['hyperv_ova_format'] = 'hyperv-vagrant'
fixed_params = ['hyperv_ova_format']
targ = self._do_target_image(self.base_img.base_image.identifier,
format.replace('-ova', ''), img_opts=img_opts,
fixed_params=fixed_params)
targ2 = self._do_target_image(targ.target_image.identifier, 'OVA',
img_opts=img_opts, fixed_params=fixed_params)
return {'image': targ2.target_image.data}
def _buildDocker(self, format):
"""
Build a base docker image. This image will be tagged with the NVR.A
automatically because we name it that way in the ImageFactory TDL.
@args:
format - the string "docker"
@returns:
a dict with some metadata about the image
"""
img_opts = {'compress': 'xz'}
targ = self._do_target_image(self.base_img.base_image.identifier,
'docker', img_opts=img_opts)
return {'image': targ.target_image.data}
def _do_target_image(self, base_id, image_type, img_opts=None, fixed_params=None):
"""
A generic method for building what ImageFactory calls "target images".
These are images based on a raw disk that was built before using the
_buildBase method.
@args:
base_id - a string ID of the image to build off of
image_type - a string representing the target type. ImageFactory
uses this to figure out what plugin to run
img_opts - a dict of additional options specific to the target
type passed in via image_type
fixed_params - a list of parameter keys that should not be
overridden by the --factory-parameter CLI
@returns:
A Builder() object from ImageFactory that contains information
about the image build, including state and progress.
"""
# TODO: test the failure case where IF itself throws an exception
# ungracefully (missing a plugin for example)
# may need to still upload ozlog and remove the log handler
if img_opts is None:
img_opts = {}
if fixed_params is None:
fixed_params = []
self.logger.debug('img_opts_pre_merge: %s' % img_opts)
self._mergeFactoryParams(img_opts, fixed_params)
self.logger.debug('img_opts_post_merge: %s' % img_opts)
target = self.bd.builder_for_target_image(image_type,
image_id=base_id,
template=None,
parameters=img_opts)
target.target_thread.join()
self._checkImageState(target)
return target
def _buildConvert(self, format):
"""
Build an image by converting the format using qemu-img. This method
enables a variety of formats like qcow, qcow2, vmdk, and vdi.
@args:
format - a string representing the image format, "qcow2"
@returns
a dict with some metadata about the image
"""
self.logger.debug('converting an image to "%s"' % format)
ofmt = format
if format == 'vpc':
ofmt = 'vhd'
newimg = os.path.join(self.workdir, self.imgname + '.%s' % ofmt)
cmd = ['/usr/bin/qemu-img', 'convert', '-f', 'raw', '-O',
format, self.base_img.base_image.data, newimg]
if format == 'qcow':
cmd.insert(2, '-c') # enable compression for qcow images
if format == 'qcow2':
# qemu-img changed its default behavior at some point to generate a
# v3 image when the requested output format is qcow2. We don't
# want koji to output different formats based on the version of
# qemu-img that happens to be on the builder. Here we use a function
# inside of Image Factory that detects qemu-img behavior and adds
# the correct options to ensure original "v2" compatibility
cmd = qemu_convert_cmd(self.base_img.base_image.data, newimg, compress=True)
# Factory does not use a full path - for consistency, force that here
cmd[0] = '/usr/bin/qemu-img'
conlog = os.path.join(self.workdir,
'qemu-img-%s-%s.log' % (format, self.arch))
log_output(self.session, cmd[0], cmd, conlog,
self.getUploadDir(), logerror=1)
return {'image': newimg}
def handler(self, name, version, release, arch, target_info,
build_tag, repo_info, inst_tree, opts=None):
if not ozif_enabled:
self.logger.error(
"ImageFactory features require the following dependencies: "
"pykickstart, imagefactory, oz and possibly python-hashlib")
raise koji.ApplianceError('ImageFactory functions not available')
if opts is None:
opts = {}
self.arch = arch
self.target_info = target_info
self.repo_info = repo_info
self.opts = opts
self.formats = self._format_deps(opts.get('format'))
# First, prepare the kickstart to use the repos we specify
kspath = self.fetchKickstart(build_tag=target_info['build_tag_name'])
ks = self.prepareKickstart(kspath, inst_tree)
kskoji = self.writeKickstart(ks,
os.path.join(self.workdir, 'koji-%s-%i-base.ks' %
(self.target_info['build_tag_name'], self.id)))
# auto-generate a TDL file and config dict for ImageFactory
self.imgname = '%s-%s-%s.%s' % (name, version, release, self.arch)
template = self.makeTemplate(self.imgname, inst_tree)
self.logger.debug('oz template: %s' % template)
config = self.makeConfig()
self.logger.debug('IF config object: %s' % config)
ApplicationConfiguration(configuration=config)
tdl_path = os.path.join(self.workdir, 'tdl-%s.xml' % self.arch)
with koji._open_text_file(tdl_path, 'wt') as tdl:
tdl.write(template)
self.uploadFile(tdl_path)
# ImageFactory picks a port to the guest VM using a rolling integer.
# This is a problem for concurrency, so we override the port it picks
# here using the task ID. (not a perfect solution but good enough:
# the likelihood of image tasks clashing here is very small)
rm = ReservationManager()
rm._listen_port = rm.MIN_PORT + self.id % (rm.MAX_PORT - rm.MIN_PORT)
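# e.g. with a hypothetical range MIN_PORT=1024, MAX_PORT=1124, task 12345
# would listen on 1024 + 12345 % 100 == 1069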
ozlogname = 'oz-%s.log' % self.arch
self.ozlog = os.path.join(self.workdir, ozlogname)
# invoke the image builds
images = self.do_images(ks, template, inst_tree)
images['raw']['tdl'] = os.path.basename(tdl_path)
# structure the results to pass back to the hub:
imgdata = {
'arch': self.arch,
'task_id': self.id,
'logs': [ozlogname],
'name': name,
'version': version,
'release': release,
'rpmlist': [],
'files': [os.path.basename(tdl_path),
os.path.basename(kspath),
os.path.basename(kskoji)]
}
# record the RPMs that were installed
if not opts.get('scratch'):
# fields = ('name', 'version', 'release', 'arch', 'epoch', 'size',
# 'payloadhash', 'buildtime')
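# e.g. one <extra_command> line from the icicle (hypothetical rpm):
#   bash,5.2.26,1.fc40,x86_64,(none),8123456,0123456789abcdef,1714000000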
icicle = xml.dom.minidom.parseString(images['raw']['icicle']) # nosec
self.logger.debug('ICICLE: %s' % images['raw']['icicle'])
for p in icicle.getElementsByTagName('extra'):
bits = p.firstChild.nodeValue.split(',')
rpm = {
'name': bits[0],
'version': bits[1],
'release': bits[2],
'arch': bits[3],
# epoch is a special case, as usual
'size': int(bits[5]),
'payloadhash': bits[6],
'buildtime': int(bits[7])
}
if rpm['name'] in ['buildsys-build', 'gpg-pubkey']:
continue
if bits[4] == '(none)':
rpm['epoch'] = None
else:
rpm['epoch'] = int(bits[4])
imgdata['rpmlist'].append(rpm)
# TODO: hack to make this work for now, need to refactor
br = BuildRoot(self.session, self.options, build_tag, self.arch,
self.id, repo_id=self.repo_info['id'])
br.markExternalRPMs(imgdata['rpmlist'])
# upload the results
for format in (f for f in self.formats if self.formats[f]):
newimg = images[format]['image']
if ('ova' in format or format in ('raw-xz', 'liveimg-squashfs', 'tar-gz')):
newname = self.imgname + '.' + format.replace('-', '.')
elif 'vagrant' in format:
# This embeds the vagrant target and the ".box" format in the name
# Previously, based on filename, these looked like OVAs
# This was confusing to many people
newname = self.imgname + '.' + format + '.box'
elif format == 'docker':
newname = self.imgname + '.' + 'tar.xz'
elif format == 'vpc':
newname = self.imgname + '.' + 'vhd'
else:
newname = self.imgname + '.' + format
if format != 'docker':
lxml = images[format]['libvirt']
imgdata['files'].append(os.path.basename(lxml))
self.uploadFile(lxml)
imgdata['files'].append(os.path.basename(newname))
self.uploadFile(newimg, remoteName=newname)
# no need to delete anything since self.workdir will get scrubbed
return imgdata
class BuildIndirectionImageTask(OzImageTask):
Methods = ['indirectionimage']
# So, these are copied directly from the base image class
# Realistically, we want to inherit methods from both BuildImageTask
# and OzImageTask.
# TODO: refactor - my initial suggestion would be to have OzImageTask
# be a child of BuildImageTask
def initImageBuild(self, name, version, release, target_info, opts):
"""create a build object for this image build"""
pkg_cfg = self.session.getPackageConfig(target_info['dest_tag_name'],
name)
self.logger.debug("%r" % pkg_cfg)
if not opts.get('skip_tag') and not opts.get('scratch'):
# Make sure package is on the list for this tag
if pkg_cfg is None:
raise koji.BuildError("package (image) %s not in list for tag %s" %
(name, target_info['dest_tag_name']))
elif pkg_cfg['blocked']:
raise koji.BuildError("package (image) %s is blocked for tag %s" %
(name, target_info['dest_tag_name']))
return self.session.host.initImageBuild(self.id,
dict(name=name, version=version, release=release,
epoch=0))
# END inefficient base image task method copies
def fetchHubOrSCM(self, filepath, fileurl, build_tag):
"""
Retrieve a file either from the hub or from a remote SCM.
If fileurl is None, we assume we are being asked to retrieve from
the hub and that filepath is relative to /mnt/koji/work.
If fileurl contains a value, we assume a remote SCM.
If retrieving remotely, we assume that filepath is the file name and
fileurl is the path in the remote SCM where that file can be found.
@returns: absolute path to the retrieved file
"""
# TODO: A small change to the base image build code could allow this method
# to be shared between both tasks. I wanted this initial implementation
# to be entirely self contained. Revisit if anyone feels like a refactor.
self.logger.debug("filepath = %s" % filepath)
if fileurl:
scm = SCM(fileurl, allow_password=self.options.allow_password_in_scm_url)
scm.assert_allowed(allowed=self.options.allowed_scms,
session=self.session,
by_config=self.options.allowed_scms_use_config,
by_policy=self.options.allowed_scms_use_policy,
policy_data={
'user_id': self.taskinfo['owner'],
'channel': self.session.getChannel(self.taskinfo['channel_id'],
strict=True)['name'],
'scratch': self.opts.get('scratch')
})
self.run_callbacks('preSCMCheckout', scminfo=scm.get_info(),
build_tag=build_tag, scratch=self.opts.get('scratch'))
logfile = os.path.join(self.workdir, 'checkout.log')
scmsrcdir = scm.checkout(self.workdir, self.session,
self.getUploadDir(), logfile)
self.run_callbacks("postSCMCheckout",
scminfo=scm.get_info(),
build_tag=build_tag,
scratch=self.opts.get('scratch'),
srcdir=scmsrcdir)
final_path = os.path.join(scmsrcdir, os.path.basename(filepath))
else:
tops = {k: getattr(self.options, k) for k in ('topurl', 'topdir')}
tops['tempdir'] = self.workdir
final_path = os.path.join(self.workdir, os.path.basename(filepath))
with koji.openRemoteFile(filepath, **tops) as remote_fileobj:
with open(final_path, 'wb') as final_fileobj:
shutil.copyfileobj(remote_fileobj, final_fileobj)
self.logger.debug('uploading retrieved file from here: %s' % final_path)
self.uploadFile(final_path) # upload the original ks file
return final_path # absolute path to the ks file
def handler(self, opts):
"""Governing task for building an image with two other images using Factory Indirection"""
# TODO: Add mode of operation where full build details are given for
# either base or utility or both, then spawn subtasks to do them first
def _task_to_image(task_id):
""" Take a task ID and turn it into an Image Factory Base Image object """
pim = PersistentImageManager.default_manager()
taskinfo = self.session.getTaskInfo(task_id)
taskstate = koji.TASK_STATES[taskinfo['state']].lower()
if taskstate != 'closed':
raise koji.BuildError("Input task (%d) must be in closed state"
" - current state is (%s)" %
(task_id, taskstate))
taskmethod = taskinfo['method']
if taskmethod != "createImage":
raise koji.BuildError("Input task method must be 'createImage'"
" - actual method (%s)" %
(taskmethod))
result = self.session.getTaskResult(task_id)
# This approach works for both scratch and saved/formal images
# The downside is that we depend on the output file naming convention
def _match_name(inlist, namere):
for filename in inlist:
if re.search(namere, filename):
return filename
task_diskimage = _match_name(result['files'], ".*qcow2$")
task_tdl = _match_name(result['files'], "tdl.*xml")
if not task_diskimage or not task_tdl:
raise koji.BuildError("Input task (%d) output is missing a qcow2 image "
"or TDL" % task_id)
task_dir = os.path.join(koji.pathinfo.work(), koji.pathinfo.taskrelpath(task_id))
diskimage_full = os.path.join(task_dir, task_diskimage)
tdl_full = os.path.join(task_dir, task_tdl)
if not (os.path.isfile(diskimage_full) and os.path.isfile(tdl_full)):
raise koji.BuildError(
"Missing TDL or qcow2 image for task (%d) - possible expired scratch build" %
(task_id))
# The sequence to recreate a valid persistent image is as follows
# Create a new BaseImage object
factory_base_image = BaseImage()
# Add it to the persistence layer
pim.add_image(factory_base_image)
# Now replace the data and template with the files referenced above
# and mark it as a complete image
# Factory doesn't attempt to modify a disk image after it is COMPLETE so
# this will work safely on read-only NFS mounts
factory_base_image.data = diskimage_full
factory_base_image.template = koji._open_text_file(tdl_full).read()
factory_base_image.status = 'COMPLETE'
# Now save it
pim.save_image(factory_base_image)
# We can now reference this object directly or via its UUID in persistent storage
return factory_base_image
def _nvr_to_image(nvr, arch):
"""
Take a build ID or NVR plus arch and turn it into
an Image Factory Base Image object
"""
pim = PersistentImageManager.default_manager()
build = self.session.getBuild(nvr)
if not build:
raise koji.BuildError("Could not find build for (%s)" % (nvr))
buildarchives = self.session.listArchives(build['id'])
if not buildarchives:
raise koji.BuildError("Could not retrieve archives for build (%s) from NVR (%s)" %
(build['id'], nvr))
buildfiles = [x['filename'] for x in buildarchives]
builddir = koji.pathinfo.imagebuild(build)
def _match_name(inlist, namere):
for filename in inlist:
if re.search(namere, filename):
return filename
build_diskimage = _match_name(buildfiles, r".*%s\.qcow2$" % (arch))
build_tdl = _match_name(buildfiles, r"tdl.%s\.xml" % (arch))
if not build_diskimage or not build_tdl:
raise koji.BuildError("Build (%s) is missing a qcow2 image or TDL "
"for arch %s" % (nvr, arch))
diskimage_full = os.path.join(builddir, build_diskimage)
tdl_full = os.path.join(builddir, build_tdl)
if not (os.path.isfile(diskimage_full) and os.path.isfile(tdl_full)):
raise koji.BuildError("Missing TDL (%s) or qcow2 (%s) image for image (%s)"
" - this should never happen" %
(build_tdl, build_diskimage, nvr))
# The sequence to recreate a valid persistent image is as follows
# Create a new BaseImage object
factory_base_image = BaseImage()
# Add it to the persistence layer
pim.add_image(factory_base_image)
# Now replace the data and template with the files referenced above
# and mark it as a complete image
# Factory doesn't attempt to modify a disk image after it is COMPLETE so
# this will work safely on read-only NFS mounts
factory_base_image.data = diskimage_full
factory_base_image.template = koji._open_text_file(tdl_full).read()
factory_base_image.status = 'COMPLETE'
# Now save it
pim.save_image(factory_base_image)
# We can now reference this object directly or via its UUID in persistent storage
return factory_base_image
if opts is None:
opts = {}
self.opts = opts
config = self.makeConfig()
self.logger.debug('IF config object: %s' % config)
ApplicationConfiguration(configuration=config)
ozlogname = 'oz-indirection.log'
ozlog = os.path.join(self.workdir, ozlogname)
# END shared code
fhandler = logging.FileHandler(ozlog)
bd = BuildDispatcher()
tlog = logging.getLogger()
tlog.setLevel(logging.DEBUG)
tlog.addHandler(fhandler)
# TODO: Copy-paste from BaseImage - refactor
target_info = self.session.getBuildTarget(opts['target'], strict=True)
name = opts['name']
version = opts['version']
release = opts['release']
# TODO: Another mostly copy-paste
if '-' in version:
raise koji.ApplianceError('The Version may not have a hyphen')
if release and '-' in release:
raise koji.ApplianceError('The Release may not have a hyphen')
indirection_template = self.fetchHubOrSCM(opts.get('indirection_template'),
opts.get('indirection_template_url'),
target_info['build_tag_name'])
self.logger.debug('Got indirection template %s' % (indirection_template))
try:
if opts.get('utility_image_build'):
utility_factory_image = _nvr_to_image(opts['utility_image_build'], opts['arch'])
else:
utility_factory_image = _task_to_image(int(opts['utility_image_task']))
if opts.get('base_image_build'):
base_factory_image = _nvr_to_image(opts['base_image_build'], opts['arch'])
else:
base_factory_image = _task_to_image(int(opts['base_image_task']))
except Exception as e:
self.logger.exception(e)
raise
# OK - We have a template and two input images - let's build
bld_info = None
if not opts.get('scratch'):
bld_info = self.initImageBuild(name, version, release,
target_info, opts)
try:
return self._do_indirection(opts, base_factory_image, utility_factory_image,
indirection_template, tlog, ozlog, fhandler,
bld_info, target_info, bd)
except Exception:
if not opts.get('scratch'):
# scratch builds do not get imported
if bld_info:
self.session.host.failBuild(self.id, bld_info['id'])
# reraise the exception
raise
def _do_indirection(self, opts, base_factory_image, utility_factory_image,
indirection_template, tlog, ozlog, fhandler, bld_info,
target_info, bd):
# TODO: The next several lines are shared with the handler for other Factory tasks
# refactor in such a way that this can be a helper in OzImageTask
# ImageFactory picks a port to the guest VM using a rolling integer.
# This is a problem for concurrency, so we override the port it picks
# here using the task ID. (not a perfect solution but good enough:
# the likelihood of image tasks clashing here is very small)
rm = ReservationManager()
rm._listen_port = rm.MIN_PORT + self.id % (rm.MAX_PORT - rm.MIN_PORT)
utility_customizations = koji._open_text_file(indirection_template).read()
results_loc = opts.get('results_loc')
if not results_loc.startswith("/"):
results_loc = "/" + results_loc
params = {'utility_image': str(utility_factory_image.identifier),
'utility_customizations': utility_customizations,
'results_location': results_loc}
random.seed() # necessary to ensure a unique mac address
try:
try:
# Embedded deep debug option - if template is just the string MOCK
# skip the actual build and create a mock target image instead
if utility_customizations.strip() == "MOCK":
target = Builder()
target_image = TargetImage()
pim = PersistentImageManager.default_manager()
pim.add_image(target_image)
target.target_image = target_image
with koji._open_text_file(target_image.data, "wt") as f:
f.write("Mock build from task ID: %s" % self.id)
target_image.status = 'COMPLETE'
else:
target = bd.builder_for_target_image('indirection',
image_id=base_factory_image.identifier,
parameters=params)
target.target_thread.join()
except Exception as e:
self.logger.debug("Exception encountered during target build")
self.logger.exception(e)
# re-raise so the task fails here rather than with a NameError
# when 'target' is referenced below
raise
finally:
# upload log even if we failed to help diagnose an issue
tlog.removeHandler(fhandler)
self.uploadFile(ozlog)
self.logger.debug('Target image results: %s' % target.target_image.status)
if target.target_image.status == 'FAILED':
# TODO abort when a task is CANCELLED
if not self.session.checkUpload('', os.path.basename(ozlog)):
tlog.removeHandler(fhandler)
self.uploadFile(ozlog)
raise koji.ApplianceError('Image status is %s: %s' %
(target.target_image.status,
target.target_image.status_detail))
self.uploadFile(target.target_image.data, remoteName=os.path.basename(results_loc))
myresults = {}
myresults['task_id'] = self.id
myresults['files'] = [os.path.basename(results_loc)]
myresults['logs'] = [os.path.basename(ozlog)]
myresults['arch'] = opts['arch']
# TODO: This should instead track the two input images: base and utility
myresults['rpmlist'] = []
# This is compatible with some helper methods originally implemented for the base
# image build. In the original usage, the dict contains an entry per build arch
# TODO: If adding multiarch support, keep this in mind
results = {str(self.id): myresults}
self.logger.debug('Image Results for hub: %s' % results)
if opts['scratch']:
self.session.host.moveImageBuildToScratch(self.id, results)
else:
self.session.host.completeImageBuild(self.id, bld_info['id'],
results)
# tag it
if not opts.get('scratch') and not opts.get('skip_tag'):
tag_task_id = self.session.host.subtask(method='tagBuild',
arglist=[target_info['dest_tag'],
bld_info['id'], False, None, True],
label='tag', parent=self.id, arch='noarch')
self.wait(tag_task_id)
# report results
report = ''
if opts.get('scratch'):
respath = os.path.join(koji.pathinfo.work(), koji.pathinfo.taskrelpath(self.id))
report += 'Scratch '
else:
respath = koji.pathinfo.imagebuild(bld_info)
report += 'image build results in: %s' % respath
return report
class RebuildSRPM(BaseBuildTask):
Methods = ['rebuildSRPM']
_taskWeight = 1.0
def checkHost(self, hostdata):
tag = self.params[1]
return self.checkHostArch(tag, hostdata)
def handler(self, srpm, build_tag, opts=None):
if opts is None:
opts = {}
repo_id = opts.get('repo_id')
if not repo_id:
raise koji.BuildError("A repo id must be provided")
repo_info = self.session.repoInfo(repo_id, strict=True)
event_id = repo_info['create_event']
build_tag = self.session.getTag(build_tag, strict=True, event=event_id)
rootopts = {'install_group': 'srpm-build', 'repo_id': repo_id}
br_arch = self.find_arch('noarch', self.session.host.getHost(
), self.session.getBuildConfig(build_tag['id'], event=event_id))
broot = BuildRoot(self.session, self.options,
build_tag['id'], br_arch, self.id, **rootopts)
broot.workdir = self.workdir
self.logger.debug("Initializing buildroot")
broot.init()
# Setup files and directories for SRPM rebuild
# We can't put this under the mock homedir because that directory
# is completely blown away and recreated on every mock invocation
srpmdir = broot.tmpdir() + '/srpm'
koji.ensuredir(srpmdir)
uploadpath = self.getUploadDir()
fn = self.localPath("work/%s" % srpm)
if not os.path.exists(fn):
raise koji.BuildError("Input SRPM file missing: %s" % fn)
shutil.copy(fn, srpmdir)
# rebuild srpm
self.logger.debug("Running srpm rebuild")
br_srpm_path = os.path.join(broot.path_without_to_within(srpmdir), os.path.basename(srpm))
broot.rebuild_srpm(br_srpm_path)
srpms = glob.glob('%s/*.src.rpm' % broot.resultdir())
if len(srpms) == 0:
raise koji.BuildError("No srpms found in %s" % broot.resultdir())
elif len(srpms) > 1:
raise koji.BuildError("Multiple srpms found in %s: %s" %
(broot.resultdir(), ", ".join(srpms)))
else:
srpm = srpms[0]
# check srpm name
h = koji.get_rpm_header(srpm)
name = koji.get_header_field(h, 'name')
version = koji.get_header_field(h, 'version')
release = koji.get_header_field(h, 'release')
srpm_name = "%(name)s-%(version)s-%(release)s.src.rpm" % locals()
if srpm_name != os.path.basename(srpm):
raise koji.BuildError('srpm name mismatch: %s != %s' %
(srpm_name, os.path.basename(srpm)))
# upload srpm and return
self.uploadFile(srpm)
brootid = broot.id
log_files = list(broot.logs)
broot.expire()
return {
'srpm': "%s/%s" % (uploadpath, srpm_name),
'logs': ["%s/%s" % (uploadpath, f) for f in log_files],
'brootid': brootid,
'source': {
'source': os.path.basename(srpm),
'url': os.path.basename(srpm),
}
}
class BuildSRPMFromSCMTask(BaseBuildTask):
Methods = ['buildSRPMFromSCM']
_taskWeight = 1.0
def spec_sanity_checks(self, filename):
spec = koji._open_text_file(filename).read()
# use re.search with a ^ anchor (plus re.M) so these tags are caught on
# any line; re.match only ever matches at the start of the whole file
for tag in ("Packager", "Distribution", "Vendor"):
if re.search("^%s:" % tag, spec, re.M):
raise koji.BuildError("%s is not allowed to be set in spec file" % tag)
for tag in ("packager", "distribution", "vendor"):
if re.search(r"^%%define\s+%s\s+" % tag, spec, re.M):
raise koji.BuildError("%s is not allowed to be defined in spec file" % tag)
def patch_scm_source(self, sourcedir, logfile, opts):
# override if desired
pass
def checkHost(self, hostdata):
tag = self.params[1]
return self.checkHostArch(tag, hostdata)
def handler(self, url, build_tag, opts=None):
if opts is None:
opts = {}
# will throw a BuildError if the url is invalid
scm = SCM(url, allow_password=self.options.allow_password_in_scm_url)
scm.assert_allowed(allowed=self.options.allowed_scms,
session=self.session,
by_config=self.options.allowed_scms_use_config,
by_policy=self.options.allowed_scms_use_policy,
policy_data={
'user_id': self.taskinfo['owner'],
'channel': self.session.getChannel(self.taskinfo['channel_id'],
strict=True)['name'],
'scratch': opts.get('scratch')
})
repo_id = opts.get('repo_id')
if not repo_id:
raise koji.BuildError("A repo id must be provided")
repo_info = self.session.repoInfo(repo_id, strict=True)
event_id = repo_info['create_event']
build_tag = self.session.getTag(build_tag, strict=True, event=event_id)
# need DNS in the chroot because "make srpm" may need to contact
# a SCM or lookaside cache to retrieve the srpm contents
rootopts = {'install_group': 'srpm-build',
'setup_dns': True,
'repo_id': repo_id}
if self.options.scm_credentials_dir is not None and os.path.isdir(
self.options.scm_credentials_dir):
rootopts['bind_opts'] = {'dirs': {self.options.scm_credentials_dir: '/credentials', }}
# Force internal_dev_setup back to true because bind_opts is used to turn it off
rootopts['internal_dev_setup'] = True
br_arch = self.find_arch('noarch', self.session.host.getHost(
), self.session.getBuildConfig(build_tag['id'], event=event_id))
broot = BuildRoot(self.session, self.options,
build_tag['id'], br_arch, self.id, **rootopts)
broot.workdir = self.workdir
self.logger.debug("Initializing buildroot")
broot.init()
# Setup files and directories for SRPM creation
# We can't put this under the mock homedir because that directory
# is completely blown away and recreated on every mock invocation
scmdir = broot.tmpdir() + '/scmroot'
koji.ensuredir(scmdir)
logfile = self.workdir + '/checkout.log'
uploadpath = self.getUploadDir()
self.run_callbacks('preSCMCheckout', scminfo=scm.get_info(),
build_tag=build_tag, scratch=opts.get('scratch'),
buildroot=broot)
# Check out spec file, etc. from SCM
sourcedir = scm.checkout(scmdir, self.session, uploadpath, logfile)
self.run_callbacks("postSCMCheckout",
scminfo=scm.get_info(),
build_tag=build_tag,
scratch=opts.get('scratch'),
srcdir=sourcedir,
buildroot=broot)
# get the source before chown; git > 2.35.2 would refuse to do that later
source = scm.get_source()
# chown the sourcedir and everything under it to the mockuser
# so we can build the srpm as non-root
uid = pwd.getpwnam(self.options.mockuser)[2]
# rpmbuild seems to complain if it's running in the "mock" group but
# files are in a different group
gid = grp.getgrnam('mock')[2]
self.chownTree(scmdir, uid, gid)
# Hook for patching spec file in place
self.patch_scm_source(sourcedir, logfile, opts)
# Find and verify that there is only one spec file.
spec_files = glob.glob("%s/*.spec" % sourcedir)
if not spec_files and self.options.support_rpm_source_layout:
# also check SPECS dir
spec_files = glob.glob("%s/SPECS/*.spec" % sourcedir)
if len(spec_files) == 0:
raise koji.BuildError("No spec file found")
elif len(spec_files) > 1:
# If there are multiple spec files, check whether one of them
# matches the SCM repo name
scm_spec_options = (
"%s/%s.spec" % (sourcedir, os.path.basename(sourcedir)),
"%s/SPECS/%s.spec" % (sourcedir, os.path.basename(sourcedir)),
)
spec_file = None
for scm_spec in scm_spec_options:
if scm_spec in spec_files:
# We have a match, so use this one.
spec_file = scm_spec
break
if not spec_file:
# We didn't find an exact match, so throw an error
raise koji.BuildError("Multiple spec files found but none is matching "
"SCM checkout dir name: %s" % spec_files)
else:
spec_file = spec_files[0]
# Run spec file sanity checks. Any failures will throw a BuildError
self.spec_sanity_checks(spec_file)
# build srpm
self.logger.debug("Running srpm build")
broot.build_srpm(spec_file, sourcedir, scm.source_cmd)
srpms = glob.glob('%s/*.src.rpm' % broot.resultdir())
if len(srpms) == 0:
raise koji.BuildError("No srpms found in %s" % broot.resultdir())
elif len(srpms) > 1:
raise koji.BuildError("Multiple srpms found in %s: %s" %
(broot.resultdir(), ", ".join(srpms)))
else:
srpm = srpms[0]
# check srpm name
h = koji.get_rpm_header(srpm)
name = koji.get_header_field(h, 'name')
version = koji.get_header_field(h, 'version')
release = koji.get_header_field(h, 'release')
srpm_name = "%(name)s-%(version)s-%(release)s.src.rpm" % locals()
if srpm_name != os.path.basename(srpm):
raise koji.BuildError('srpm name mismatch: %s != %s' %
(srpm_name, os.path.basename(srpm)))
# upload srpm and return
self.uploadFile(srpm)
brootid = broot.id
log_files = list(broot.logs)
broot.expire()
return {'srpm': "%s/%s" % (uploadpath, srpm_name),
'logs': ["%s/%s" % (uploadpath, f) for f in log_files],
'brootid': brootid,
'source': source,
}
class TagNotificationTask(BaseTaskHandler):
Methods = ['tagNotification']
_taskWeight = 0.1
message_templ = \
"""From: %(from_addr)s\r
Subject: %(nvr)s %(result)s %(operation)s by %(user_name)s\r
To: %(to_addrs)s\r
X-Koji-Package: %(pkg_name)s\r
X-Koji-NVR: %(nvr)s\r
X-Koji-Draft: %(draft)s\r
X-Koji-User: %(user_name)s\r
X-Koji-Status: %(status)s\r
%(tag_headers)s\r
\r
Package: %(pkg_name)s\r
NVR: %(nvr)s\r
User: %(user_name)s\r
Status: %(status)s\r
%(operation_details)s\r
%(nvr)s %(result)s %(operation)s by %(user_name)s\r
%(failure_info)s\r
"""
def handler(self, recipients, is_successful, tag_info, from_info,
build_info, user_info, ignore_success=None, failure_msg=''):
if len(recipients) == 0:
self.logger.debug('task %i: no recipients, not sending notifications', self.id)
return
if ignore_success and is_successful:
self.logger.debug(
'task %i: tag operation successful and ignore success is true, '
'not sending notifications', self.id)
return
build = self.session.getBuild(build_info)
user = self.session.getUser(user_info)
pkg_name = build['package_name']
nvr = koji.buildLabel(build)
draft = build.get('draft', False)
user_name = user['name']
from_addr = self.options.from_addr
to_addrs = ', '.join(recipients)
operation = '%(action)s'
operation_details = 'Tag Operation: %(action)s\r\n'
tag_headers = ''
if from_info:
from_tag = self.session.getTag(from_info)
from_tag_name = from_tag['name']
operation += ' from %s' % from_tag_name
operation_details += 'From Tag: %s\r\n' % from_tag_name
tag_headers += 'X-Koji-Tag: %s' % from_tag_name
action = 'untagged'
if tag_info:
tag = self.session.getTag(tag_info)
tag_name = tag['name']
operation += ' into %s' % tag_name
operation_details += 'Into Tag: %s\r\n' % tag_name
if tag_headers:
tag_headers += '\r\n'
tag_headers += 'X-Koji-Tag: %s' % tag_name
action = 'tagged'
if tag_info and from_info:
action = 'moved'
operation = operation % locals()
operation_details = operation_details % locals()
if is_successful:
result = 'successfully'
status = 'complete'
failure_info = ''
else:
result = 'unsuccessfully'
status = 'failed'
failure_info = "Operation failed with the error:\r\n %s\r\n" % failure_msg
message = self.message_templ % locals()
# ensure message is in UTF-8
message = koji.fixEncoding(message)
# binary for python3
if six.PY3:
message = message.encode('utf8')
server = smtplib.SMTP(self.options.smtphost)
if self.options.smtp_user is not None and self.options.smtp_pass is not None:
server.login(self.options.smtp_user, self.options.smtp_pass)
# server.set_debuglevel(True)
server.sendmail(from_addr, recipients, message)
server.quit()
return 'sent notification of tag operation %i to: %s' % (self.id, to_addrs)
class BuildNotificationTask(BaseTaskHandler):
Methods = ['buildNotification']
_taskWeight = 0.1
# XXX externalize these templates somewhere
subject_templ = "Package: %(build_nvr)s Tag: %(dest_tag)s Status: %(status)s " \
"Built by: %(build_owner)s"
message_templ = \
"""From: %(from_addr)s\r
Subject: %(subject)s\r
To: %(to_addrs)s\r
X-Koji-Tag: %(dest_tag)s\r
X-Koji-Package: %(build_pkg_name)s\r
X-Koji-Builder: %(build_owner)s\r
X-Koji-Status: %(status)s\r
X-Koji-Draft: %(draft)s\r
\r
Package: %(build_nvr)s\r
Tag: %(dest_tag)s\r
Status: %(status)s%(cancel_info)s\r
Built by: %(build_owner)s\r
ID: %(build_id)i\r
Started: %(creation_time)s\r
Finished: %(completion_time)s\r
%(changelog)s\r
%(failure)s\r
%(output)s\r
Task Info: %(weburl)s/taskinfo?taskID=%(task_id)i\r
Build Info: %(weburl)s/buildinfo?buildID=%(build_id)i\r
"""
def _getTaskData(self, task_id, data=None):
if not data:
data = {}
taskinfo = self.session.getTaskInfo(task_id)
if not taskinfo:
# invalid task_id
return data
if taskinfo['host_id']:
hostinfo = self.session.getHost(taskinfo['host_id'])
else:
hostinfo = None
result = None
try:
result = self.session.getTaskResult(task_id)
except Exception:
excClass, result = sys.exc_info()[:2]
if hasattr(result, 'faultString'):
result = result.faultString
else:
result = '%s: %s' % (excClass.__name__, result)
result = result.strip()
# clear the exception, since we're just using
# it for display purposes
try:
sys.exc_clear()
except AttributeError:
# sys.exc_clear() is obsolete in Python 3
pass
if not result:
result = 'Unknown'
logs, rpms, srpms, misc = [], [], [], []
files_data = self.session.listTaskOutput(task_id, all_volumes=True)
for filename in files_data:
if filename.endswith('.log'):
logs += [(filename, volume) for volume in files_data[filename]]
# all rpms + srpms are expected to be in builddir
elif filename.endswith('.src.rpm'):
srpms.append(filename)
elif filename.endswith('.rpm'):
rpms.append(filename)
else:
misc += [(filename, volume) for volume in files_data[filename]]
# sort by volumes and filenames
logs.sort(key=lambda x: (x[1], x[0]))
misc.sort(key=lambda x: (x[1], x[0]))
rpms.sort()
data[task_id] = {}
data[task_id]['id'] = taskinfo['id']
data[task_id]['method'] = taskinfo['method']
data[task_id]['arch'] = taskinfo['arch']
data[task_id]['build_arch'] = taskinfo['label'] or ''
data[task_id]['host'] = hostinfo['name'] if hostinfo else None
data[task_id]['state'] = koji.TASK_STATES[taskinfo['state']].lower()
data[task_id]['result'] = result
data[task_id]['request'] = self.session.getTaskRequest(task_id)
data[task_id]['logs'] = logs
data[task_id]['rpms'] = rpms
data[task_id]['srpms'] = srpms
data[task_id]['misc'] = misc
children = self.session.getTaskChildren(task_id)
for child in children:
data = self._getTaskData(child['id'], data)
return data
def handler(self, recipients, build, target, weburl):
if len(recipients) == 0:
self.logger.debug('task %i: no recipients, not sending notifications', self.id)
return
build_pkg_name = build['package_name']
build_pkg_evr = '%s%s-%s' % \
((build['epoch'] and str(build['epoch']) + ':' or ''),
build['version'],
build['release'])
build_nvr = koji.buildLabel(build)
build_id = build['id']
build_owner = build['owner_name']
draft = build.get('draft', False)
# target comes from session.py:_get_build_target()
dest_tag = None
if target is not None:
dest_tag = target['dest_tag_name']
status = koji.BUILD_STATES[build['state']].lower()
creation_time = koji.formatTimeLong(build['creation_ts'])
completion_time = koji.formatTimeLong(build['completion_ts'])
task_id = build['task_id']
task_data = self._getTaskData(task_id)
cancel_info = ''
failure_info = ''
if build['state'] == koji.BUILD_STATES['CANCELED']:
# The owner of the buildNotification task is the one
# who canceled the task, it turns out.
this_task = self.session.getTaskInfo(self.id)
if this_task['owner']:
canceler = self.session.getUser(this_task['owner'])
cancel_info = "\r\nCanceled by: %s" % canceler['name']
elif build['state'] == koji.BUILD_STATES['FAILED']:
failure_data = task_data[task_id]['result']
failed_hosts = ['%s (%s)' % (task['host'], task['arch'])
for task in task_data.values()
if task['host'] and task['state'] == 'failed']
failure_info = "\r\n%s (%d) failed on %s:\r\n %s" % (build_nvr, build_id,
', '.join(failed_hosts),
failure_data)
failure = failure_info or cancel_info or ''
tasks = {'failed': [task for task in task_data.values() if task['state'] == 'failed'],
'canceled': [task for task in task_data.values() if task['state'] == 'canceled'],
'closed': [task for task in task_data.values() if task['state'] == 'closed']}
srpms = []
for taskinfo in task_data.values():
for srpmfile in taskinfo['srpms']:
srpms.append(srpmfile)
srpms = self.uniq(srpms)  # uniq() also sorts
if srpms:
output = "SRPMS:\r\n"
for srpm in srpms:
output += " %s" % srpm
output += "\r\n\r\n"
else:
output = ''
pathinfo = koji.PathInfo(topdir=self.options.topurl)
buildurl = pathinfo.build(build)
# list states here to make them go in the correct order
for task_state in ['failed', 'canceled', 'closed']:
if tasks[task_state]:
output += "%s tasks:\r\n" % task_state.capitalize()
output += "%s-------\r\n\r\n" % ("-" * len(task_state))
for task in tasks[task_state]:
output += "Task %s" % task['id']
if task['host']:
output += " on %s\r\n" % task['host']
else:
output += "\r\n"
output += "Task Type: %s\r\n" % koji.taskLabel(task)
if task['logs']:
output += "logs:\r\n"
for (file_, volume) in task['logs']:
# compare the state name itself; tasks[task_state] is a list and
# would never equal 'closed', so the old check was always true
if task_state != 'closed':
output += " %s/getfile?taskID=%s&name=%s&volume=%s\r\n" % (
weburl, task['id'], file_, volume)
else:
output += " %s\r\n" % '/'.join([buildurl, 'data', 'logs',
task['build_arch'], file_])
if task['rpms']:
output += "rpms:\r\n"
for file_ in task['rpms']:
output += " %s\r\n" % '/'.join([buildurl, task['build_arch'], file_])
if task['misc']:
output += "misc:\r\n"
for (file_, volume) in task['misc']:
output += " %s/getfile?taskID=%s&name=%s&volume=%s\r\n" % (
weburl, task['id'], file_, volume)
output += "\r\n"
output += "\r\n"
changelog = koji.util.formatChangelog(self.session.getChangelogEntries(
build_id, queryOpts={'limit': 3})).replace("\n", "\r\n")
if changelog:
changelog = "Changelog:\r\n%s" % changelog
from_addr = self.options.from_addr
to_addrs = ', '.join(recipients)
subject = self.subject_templ % locals()
message = self.message_templ % locals()
# ensure message is in UTF-8
message = koji.fixEncoding(message)
# binary for python3
if six.PY3:
message = message.encode('utf8')
server = smtplib.SMTP(self.options.smtphost)
if self.options.smtp_user is not None and self.options.smtp_pass is not None:
server.login(self.options.smtp_user, self.options.smtp_pass)
# server.set_debuglevel(True)
server.sendmail(from_addr, recipients, message)
server.quit()
return 'sent notification of build %i to: %s' % (build_id, to_addrs)
def uniq(self, items):
"""Remove duplicates from the list of items, and sort the list."""
return sorted(set(items))
class NewRepoTask(BaseTaskHandler):
Methods = ['newRepo']
_taskWeight = 0.1
def copy_arch_repo(self, src_repo_id, src_repo_path, repo_id, arch):
"""Copy repodata, return False if it fails"""
dst_repodata = joinpath(self.workdir, arch, 'repodata')
src_repodata = joinpath(src_repo_path, arch, 'repodata')
try:
# copy repodata
self.logger.debug('Copying repodata %s to %s' % (src_repodata, dst_repodata))
if os.path.exists(src_repodata):
# symlinks=True is not needed, as symlinks are not part of the arch repodir
shutil.copytree(src_repodata, dst_repodata)
uploadpath = self.getUploadDir()
files = []
for f in os.listdir(dst_repodata):
files.append(f)
self.session.uploadWrapper('%s/%s' % (dst_repodata, f), uploadpath, f)
return [uploadpath, files]
except Exception as ex:
self.logger.warning("Copying repo %i to %i failed. %r" % (src_repo_id, repo_id, ex))
# Try to remove potential leftovers and fail if there is some problem
koji.util.rmtree(dst_repodata, self.logger)
return False
def check_repo(self, src_repo_path, dst_repo_path, src_repo, dst_repo, opts):
"""Check if oldrepo is reusable as is and can be directly copied"""
# with_src, debuginfo, pkglist, blocklist, grouplist
# We're ignoring maven support here. It is handled in repo_init, which is
# always called, so it doesn't affect the efficiency of pre-cloning rpm repos.
if not src_repo_path:
self.logger.debug("Source repo wasn't found")
return False
if not os.path.isdir(src_repo_path):
self.logger.debug("Source repo doesn't exist %s" % src_repo_path)
return False
try:
repo_json = koji.load_json(joinpath(src_repo_path, 'repo.json'))
for key in ('with_debuginfo', 'with_src', 'with_separate_src'):
if repo_json.get(key, False) != opts.get(key, False):
return False
except IOError:
self.logger.debug("Can't open repo.json in %s" % src_repo_path)
return False
# compare comps if they exist
src_comps_path = joinpath(src_repo_path, 'groups/comps.xml')
dst_comps_path = joinpath(dst_repo_path, 'groups/comps.xml')
src_exists = os.path.exists(src_comps_path)
if src_exists != os.path.exists(dst_comps_path):
self.logger.debug("Comps exists only in one repo")
return False
if src_exists and not filecmp.cmp(src_comps_path, dst_comps_path, shallow=False):
self.logger.debug("Comps differs")
return False
# if there is any external repo, don't trust the repodata
if self.session.getExternalRepoList(src_repo['tag_id'], event=src_repo['create_event']):
self.logger.debug("Source repo use external repos")
return False
if self.session.getExternalRepoList(dst_repo['tag_id'], event=dst_repo['create_event']):
self.logger.debug("Destination repo use external repos")
return False
self.logger.debug('Repo test passed')
return True
def check_arch_repo(self, src_repo_path, dst_repo_path, arch):
"""More checks based on architecture content"""
for fname in ('blocklist', 'pkglist'):
src_file = joinpath(src_repo_path, arch, fname)
dst_file = joinpath(dst_repo_path, arch, fname)
# both files must exist, otherwise we cannot compare them
if not os.path.exists(src_file) or not os.path.exists(dst_file):
self.logger.debug("%s doesn't exit in one of the repos" % fname)
return False
# content must be same
if not filecmp.cmp(src_file, dst_file, shallow=False):
self.logger.debug('%s differs' % fname)
return False
self.logger.debug('Arch repo test passed %s' % arch)
return True
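# Summary of how these checks are used by the handler below: for each arch,
# the old repodata is copied only when both check_repo() (tag level) and
# check_arch_repo() (arch level) pass; otherwise a normal createrepo
# subtask is spawned for that arch.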
def handler(self, tag, event=None, src=None, debuginfo=None, separate_src=None, opts=None):
tinfo = self.session.getTag(tag, strict=True, event=event)
# handle deprecated opts
_opts = {}
if src is not None:
_opts['src'] = bool(src)
if debuginfo is not None:
_opts['debuginfo'] = bool(debuginfo)
if separate_src is not None:
_opts['separate_src'] = bool(separate_src)
if _opts:
if opts is not None:
raise koji.ParameterError('opts parameter cannot be combined with legacy options')
self.logger.warning('The src, debuginfo, and separate_src parameters for newRepo '
'tasks are deprecated. Use the opts parameter.')
opts = _opts
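# e.g. a legacy call newRepo(tag, src=True) now behaves like
# newRepo(tag, opts={'src': True})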
# check for fs access before we try calling repoInit
top_repos_dir = joinpath(self.options.topdir, "repos")
if not os.path.isdir(top_repos_dir):
# missing or incorrect mount?
# refuse and let another host try
raise RefuseTask("No access to repos dir %s" % top_repos_dir)
# call repoInit
kwargs = {'opts': opts, 'task_id': self.id}
if event is not None:
kwargs['event'] = event
repo_id, event_id = self.session.host.repoInit(tinfo['id'], **kwargs)
path = koji.pathinfo.repo(repo_id, tinfo['name'])
if not os.path.isdir(path):
raise koji.GenericError("Repo directory missing: %s" % path)
arches = []
for fn in os.listdir(path):
if fn != 'groups' and os.path.isfile("%s/%s/pkglist" % (path, fn)):
arches.append(fn)
# see if we can find a previous repo to update from
# Only shadowbuild tags should start with SHADOWBUILD; their repos are
# auto-expired, so use the most recent expired repo for those newRepo tasks.
if tinfo['name'].startswith('SHADOWBUILD'):
oldrepo_state = koji.REPO_EXPIRED
else:
oldrepo_state = koji.REPO_READY
oldrepo = self.session.getRepo(tinfo['id'], state=oldrepo_state)
oldrepo_path = None
if oldrepo:
oldrepo_path = koji.pathinfo.repo(oldrepo['id'], tinfo['name'])
oldrepo['tag_id'] = tinfo['id']
# If there is no old repo, try to find the first usable repo in the
# inheritance chain and use it as a source. oldrepo is not used if
# createrepo_update is not set, so don't waste the call in that case.
if not oldrepo and self.options.createrepo_update:
tags = self.session.getFullInheritance(tinfo['id'])
# the best candidate is usually (though not necessarily) found at a
# higher inheritance level, so sort tags by depth
for tag in sorted(tags, key=lambda x: x['currdepth']):
oldrepo = self.session.getRepo(tag['parent_id'], state=oldrepo_state)
if oldrepo:
parenttag = self.session.getTag(tag['parent_id'])
oldrepo_path = koji.pathinfo.repo(oldrepo['id'], parenttag['name'])
oldrepo['tag_id'] = parenttag['id']
break
newrepo_path = koji.pathinfo.repo(repo_id, tinfo['name'])
newrepo = {'tag_id': tinfo['id'], 'create_event': event_id}
if self.options.copy_old_repodata:
possibly_clonable = self.check_repo(oldrepo_path, newrepo_path,
oldrepo, newrepo, kwargs)
else:
possibly_clonable = False
subtasks = {}
data = {}
cloned_archs = []
for arch in arches:
if possibly_clonable and self.check_arch_repo(oldrepo_path, newrepo_path, arch):
result = self.copy_arch_repo(oldrepo['id'], oldrepo_path, repo_id, arch)
if result:
data[arch] = result
cloned_archs.append(arch)
continue
# if we can't copy old repo directly, trigger normal createrepo
arglist = [repo_id, arch, oldrepo]
subtasks[arch] = self.session.host.subtask(method='createrepo',
arglist=arglist,
label=arch,
parent=self.id,
arch='noarch')
# gather subtask results
if subtasks:
results = self.wait(to_list(subtasks.values()), all=True, failany=True)
for (arch, task_id) in six.iteritems(subtasks):
data[arch] = results[task_id]
# finalize
kwargs = {}
if cloned_archs:
kwargs['repo_json_updates'] = {
'cloned_from_repo_id': oldrepo['id'],
'cloned_archs': cloned_archs,
}
self.session.host.repoDone(repo_id, data, **kwargs)
return repo_id, event_id
class CreaterepoTask(BaseTaskHandler):
Methods = ['createrepo']
_taskWeight = 1.5
def handler(self, repo_id, arch, oldrepo):
# arch is the arch of the repo, not the task
rinfo = self.session.repoInfo(repo_id, strict=True)
if rinfo['state'] != koji.REPO_INIT:
raise koji.GenericError("Repo %(id)s not in INIT state (got %(state)s)" % rinfo)
self.repo_id = rinfo['id']
self.pathinfo = koji.PathInfo(self.options.topdir)
toprepodir = self.pathinfo.repo(repo_id, rinfo['tag_name'])
self.repodir = '%s/%s' % (toprepodir, arch)
if not os.path.isdir(self.repodir):
top_repos_dir = joinpath(self.options.topdir, "repos")
if not os.path.isdir(top_repos_dir):
# missing or incorrect mount?
# refuse and let another host try
raise RefuseTask("No access to repos dir %s" % top_repos_dir)
else:
# we seem to have fs access, but dir is missing, perhaps a repo_init bug?
raise koji.GenericError("Repo directory missing: %s" % self.repodir)
groupdata = os.path.join(toprepodir, 'groups', 'comps.xml')
# set up our output dir
self.outdir = '%s/repo' % self.workdir
self.datadir = '%s/repodata' % self.outdir
pkglist = os.path.join(self.repodir, 'pkglist')
if os.path.getsize(pkglist) == 0:
pkglist = None
self.create_local_repo(rinfo, arch, pkglist, groupdata, oldrepo)
external_repos = self.session.getExternalRepoList(
rinfo['tag_id'], event=rinfo['create_event'])
if external_repos:
self.merge_repos(external_repos, arch, groupdata)
elif pkglist is None:
with open(os.path.join(self.datadir, "EMPTY_REPO"), 'wt') as fo:
fo.write("This repo is empty because its tag has no content for this arch\n")
tag = self.session.getTag(rinfo['tag_id'], event=rinfo['create_event'],
strict=True)['name']
self.run_callbacks('postCreateRepo', tag=tag, repodir=self.outdir,
repo_id=self.repo_id, arch=arch)
uploadpath = self.getUploadDir()
files = []
for f in os.listdir(self.datadir):
files.append(f)
self.session.uploadWrapper('%s/%s' % (self.datadir, f), uploadpath, f)
return [uploadpath, files]
def create_local_repo(self, rinfo, arch, pkglist, groupdata, oldrepo):
koji.ensuredir(self.outdir)
if self.options.use_createrepo_c:
cmd = ['/usr/bin/createrepo_c', '--error-exit-val']
else:
cmd = ['/usr/bin/createrepo']
cmd.extend(['-vd', '-o', self.outdir])
if pkglist is not None:
cmd.extend(['-i', pkglist])
if os.path.isfile(groupdata):
cmd.extend(['-g', groupdata])
# attempt to recycle repodata from last repo
if pkglist and oldrepo and self.options.createrepo_update:
# the old repo could be from an inherited tag, so the path needs to be
# composed from that tag's name, not rinfo['tag_name']
oldrepo = self.session.repoInfo(oldrepo['id'], strict=True)
oldpath = self.pathinfo.repo(oldrepo['id'], oldrepo['tag_name'])
olddatadir = '%s/%s/repodata' % (oldpath, arch)
if not os.path.isdir(olddatadir):
self.logger.warning("old repodata is missing: %s" % olddatadir)
else:
shutil.copytree(olddatadir, self.datadir)
oldorigins = os.path.join(self.datadir, 'pkgorigins.gz')
if os.path.isfile(oldorigins):
# remove any previous origins file and rely on mergerepos
# to rewrite it (if we have external repos to merge)
os.unlink(oldorigins)
cmd.append('--update')
if self.options.createrepo_skip_stat:
cmd.append('--skip-stat')
# note: we can't easily use a cachedir because we do not have write
# permission. The good news is that with --update we won't need to
# scan many rpms.
if pkglist is None:
cmd.append(self.outdir)
else:
cmd.append(self.repodir)
logfile = '%s/createrepo.log' % self.workdir
status = log_output(self.session, cmd[0], cmd, logfile, self.getUploadDir(), logerror=True)
if not isSuccess(status):
raise koji.GenericError('failed to create repo: %s'
% parseStatus(status, format_shell_cmd(cmd)))
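# A resulting invocation might look roughly like (illustrative paths):
#   /usr/bin/createrepo_c --error-exit-val -vd -o <workdir>/repo \
#       -i <repodir>/pkglist -g <toprepodir>/groups/comps.xml \
#       --update --skip-stat <repodir>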
def _get_mergerepo_c_version(self):
cmd = ['/usr/bin/mergerepo_c', '--version']
try:
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
out, _ = proc.communicate()
status = proc.wait()
if status != 0:
self.logger.warning("Unable to detect mergerepo_c version")
return None
except Exception:
self.logger.warning("Unable to detect mergerepo_c version")
return None
out = out.decode().strip()
# Expects output like: "Version: 0.15.11 (Features: DeltaRPM LegacyWeakdeps )"
m = re.match(r'Version: (\d+)\.(\d+)\.(\d+)', out)
if not m:
self.logger.warning("Unable to parse mergerepo_c version")
return None
version = m.groups()
version = [int(x) for x in version]
return tuple(version)
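# Worked example: "Version: 0.15.11 (Features: DeltaRPM LegacyWeakdeps )"
# parses to the tuple (0, 15, 11), which compares naturally against
# thresholds like (0, 13, 0) below.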
def merge_repos(self, external_repos, arch, groupdata):
# group repos by merge type
repos_by_mode = {}
for repo in external_repos:
repos_by_mode.setdefault(
repo.get('merge_mode', 'koji'), []).append(repo)
# figure out merge mode
if len(repos_by_mode) > 1:
# TODO: eventually support mixing merge modes
raise koji.GenericError('Found multiple merge modes for external '
'repos: %s' % to_list(repos_by_mode.keys()))
merge_mode = to_list(repos_by_mode.keys())[0]
# move current repo to the premerge location
localdir = '%s/repo_%s_premerge' % (self.workdir, self.repo_id)
os.rename(self.outdir, localdir)
koji.ensuredir(self.outdir)
# generate repo url list, starting with our local premerge repo
repos = ['file://' + localdir + '/']
for repo in external_repos:
if repo.get('arches') and arch not in repo['arches'].split():
# skip external repos whose archlist doesn't include this arch
continue
ext_url = repo['url']
# substitute $arch in the url with the arch of the repo we're generating
ext_url = ext_url.replace('$arch', arch)
repos.append(ext_url)
mergerepo_c_version = None
if self.options.use_createrepo_c or six.PY3:
mergerepo_c_version = self._get_mergerepo_c_version()
# construct command
if merge_mode == 'simple':
if mergerepo_c_version and mergerepo_c_version >= (0, 13, 0):
cmd = ['/usr/bin/mergerepo_c', '--koji', '--simple']
elif six.PY3:
# koji's mergerepos script only works on python2
raise koji.GenericError("mergerepo_c is not installed or has low version: "
"%s (0.13.0 needed for --simple)" %
".".join([str(d) for d in mergerepo_c_version or [None]]))
else:
cmd = ['/usr/libexec/kojid/mergerepos',
'--mode', 'simple',
'--tempdir', self.workdir]
elif merge_mode == 'bare':
# "bare" merge mode for repos with modular metadata
# forces use of mergerepo_c
cmd = ['/usr/bin/mergerepo_c', '--pkgorigins', '--all']
elif self.options.use_createrepo_c or six.PY3:
cmd = ['/usr/bin/mergerepo_c', '--koji']
else:
cmd = ['/usr/libexec/kojid/mergerepos', '--tempdir', self.workdir]
if merge_mode != 'bare':
blocklist = self.repodir + '/blocklist'
cmd.extend(['-b', blocklist])
cmd.extend(['-a', arch, '-o', self.outdir])
if cmd[0].endswith('mergerepo_c') and mergerepo_c_version \
and mergerepo_c_version >= (0, 15, 11):
cmd.append('--arch-expand')
if os.path.isfile(groupdata):
cmd.extend(['-g', groupdata])
for repo in repos:
cmd.extend(['-r', repo])
# run command
logfile = '%s/mergerepos.log' % self.workdir
env = {'TMPDIR': self.workdir}
status = log_output(self.session, cmd[0], cmd, logfile, self.getUploadDir(),
logerror=True, env=env)
if not isSuccess(status):
raise koji.GenericError('failed to merge repos: %s'
% parseStatus(status, format_shell_cmd(cmd)))
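# A typical merge invocation might look roughly like (illustrative values):
#   /usr/bin/mergerepo_c --koji -b <repodir>/blocklist -a x86_64 \
#       -o <workdir>/repo -g <groupdata> \
#       -r file://<workdir>/repo_<repo_id>_premerge/ -r <external repo url>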
class NewDistRepoTask(BaseTaskHandler):
Methods = ['distRepo']
_taskWeight = 0.1
def handler(self, tag, repo_id, keys, task_opts):
tinfo = self.session.getTag(tag, strict=True, event=task_opts['event'])
if len(task_opts['arch']) == 0:
arches = tinfo['arches'] or ''
task_opts['arch'] = arches.split()
if len(task_opts['arch']) == 0:
raise koji.GenericError('No arches specified and none configured for the tag')
subtasks = {}
# weed out subarchitectures
canonArches = set()
for arch in task_opts['arch']:
canonArches.add(koji.canonArch(arch))
arch32s = set()
for arch in canonArches:
if not koji.arch.isMultiLibArch(arch):
arch32s.add(arch)
for arch in arch32s:
# create the 32-bit multilib arch subtasks first; when multilib is
# requested we wait for them below so the 64-bit arches can use their output
arglist = [tag, repo_id, arch, keys, task_opts]
subtasks[arch] = self.session.host.subtask(
method='createdistrepo', arglist=arglist, label=arch,
parent=self.id, arch='noarch')
if len(subtasks) > 0 and task_opts['multilib']:
self.wait(to_list(subtasks.values()), all=True, failany=True)
for arch in arch32s:
# move the 32-bit task output to the final resting place
# so the 64-bit arches can use it for multilib
upload_dir = koji.pathinfo.taskrelpath(subtasks[arch])
self.session.host.distRepoMove(repo_id, upload_dir, arch)
for arch in canonArches:
# do the other arches
if arch not in arch32s:
arglist = [tag, repo_id, arch, keys, task_opts]
subtasks[arch] = self.session.host.subtask(
method='createdistrepo', arglist=arglist, label=arch,
parent=self.id, arch='noarch')
# wait for 64-bit subtasks to finish
self.wait(to_list(subtasks.values()), all=True, failany=True)
for (arch, task_id) in six.iteritems(subtasks):
if task_opts['multilib'] and arch in arch32s:
# already moved above
continue
upload_dir = koji.pathinfo.taskrelpath(subtasks[arch])
self.session.host.distRepoMove(repo_id, upload_dir, arch)
self.session.host.repoDone(repo_id, {})
return 'Dist repository #%s successfully generated' % repo_id
class createDistRepoTask(BaseTaskHandler):
Methods = ['createdistrepo']
_taskWeight = 1.5
archmap = {'s390x': 's390', 'ppc64': 'ppc', 'x86_64': 'i686'}
compat = {"i386": ("athlon", "i686", "i586", "i486", "i386", "noarch"),
"x86_64": ("amd64", "ia32e", "x86_64", "noarch"),
"ia64": ("ia64", "noarch"),
"ppc": ("ppc", "noarch"),
"ppc64": ("ppc64p7", "ppc64pseries", "ppc64iseries", "ppc64", "noarch"),
"ppc64le": ("ppc64le", "noarch"),
"s390": ("s390", "noarch"),
"s390x": ("s390x", "noarch"),
"sparc": ("sparcv9v", "sparcv9", "sparcv8", "sparc", "noarch"),
"sparc64": ("sparc64v", "sparc64", "noarch"),
"alpha": ("alphaev6", "alphaev56", "alphaev5", "alpha", "noarch"),
"arm": ("arm", "armv4l", "armv4tl", "armv5tel", "armv5tejl", "armv6l", "armv7l",
"noarch"),
"armhfp": ("armv7hl", "armv7hnl", "noarch"),
"aarch64": ("aarch64", "noarch"),
"riscv64": ("riscv64", "noarch"),
"sw_64": ("sw_64", "noarch"),
"loongarch64": ("loongarch64", "noarch"),
"src": ("src",)
}
biarch = {"ppc": "ppc64", "x86_64": "i386", "sparc":
"sparc64", "s390x": "s390", "ppc64": "ppc"}
def handler(self, tag, repo_id, arch, keys, opts):
# arch is the arch of the repo, not the task
self.rinfo = self.session.repoInfo(repo_id, strict=True)
if self.rinfo['state'] != koji.REPO_INIT:
raise koji.GenericError("Repo %(id)s not in INIT state (got %(state)s)" % self.rinfo)
groupdata = os.path.join(
koji.pathinfo.distrepo(repo_id, self.rinfo['tag_name']),
'groups', 'comps.xml')
# set up our output dir
self.repodir = '%s/repo' % self.workdir
self.repo_files = []
koji.ensuredir(self.repodir)
self.subrepos = set()
# gather oldpkgs data if delta option in use
oldpkgs = []
if opts.get('delta'):
# should be a list of repo ids to delta against
for delta_repo_id in opts['delta']:
oldrepo = self.session.repoInfo(delta_repo_id, strict=True)
if not oldrepo['dist']:
raise koji.GenericError("Base repo for deltas must also "
"be a dist repo")
# regular repos don't actually have rpms, just pkglist
path = koji.pathinfo.distrepo(delta_repo_id, oldrepo['tag_name'])
if not os.path.exists(path):
raise koji.GenericError('Base drpm repo missing: %s' % path)
# note: since we're using the top level dir, this will handle
# split repos as well
oldpkgs.append(path)
oldrepo = self.session.getRepo(tag, dist=True, state=koji.REPO_READY)
oldrepodata = None
if oldrepo:
oldrepodir = koji.pathinfo.distrepo(oldrepo['id'], tag)
# sort out our package list(s)
self.uploadpath = self.getUploadDir()
self.get_rpms(tag, arch, keys, opts)
if opts['multilib'] and koji.arch.isMultiLibArch(arch):
self.do_multilib(arch, self.archmap[arch], opts['multilib'])
self.split_pkgs(opts)
self.write_kojipkgs()
self.write_pkglist()
self.link_pkgs()
# generate the repodata
if oldrepo:
oldrepodata = os.path.join(oldrepodir, arch, 'repodata')
self.do_createrepo(self.repodir, '%s/pkglist' % self.repodir,
groupdata, oldpkgs=oldpkgs, oldrepodata=oldrepodata,
zck=opts.get('zck'), zck_dict_dir=opts.get('zck_dict_dir'),
createrepo_skip_stat=opts.get('createrepo_skip_stat'))
for subrepo in self.subrepos:
if oldrepo:
oldrepodata = os.path.join(oldrepodir, arch, subrepo, 'repodata')
self.do_createrepo(
'%s/%s' % (self.repodir, subrepo),
'%s/%s/pkglist' % (self.repodir, subrepo),
groupdata, oldpkgs=oldpkgs, oldrepodata=oldrepodata,
logname='createrepo_%s' % subrepo,
zck=opts.get('zck'),
zck_dict_dir=opts.get('zck_dict_dir'))
if len(self.kojipkgs) == 0:
fn = os.path.join(self.repodir, "repodata", "EMPTY_REPO")
with open(fn, 'wt') as fp:
fp.write("This repo is empty because its tag has no content "
"for this arch\n")
self.run_callbacks('postCreateDistRepo', tag=tag, repodir=self.repodir,
repo_id=repo_id, arch=arch, keys=keys, opts=opts)
# upload repo files
self.upload_repo()
self.upload_repo_manifest()
def upload_repo_file(self, relpath):
"""Upload a file from the repo
relpath should be relative to self.repodir
"""
localpath = '%s/%s' % (self.repodir, relpath)
reldir = os.path.dirname(relpath)
if reldir:
uploadpath = "%s/%s" % (self.uploadpath, reldir)
fn = os.path.basename(relpath)
else:
uploadpath = self.uploadpath
fn = relpath
self.session.uploadWrapper(localpath, uploadpath, fn)
self.repo_files.append(relpath)
def upload_repo(self):
"""Traverse the repo and upload needed files
We omit the symlinks we made for the rpms
"""
for dirpath, dirs, files in os.walk(self.repodir):
reldir = os.path.relpath(dirpath, self.repodir)
for filename in files:
path = "%s/%s" % (dirpath, filename)
if os.path.islink(path):
continue
relpath = "%s/%s" % (reldir, filename)
self.upload_repo_file(relpath)
def upload_repo_manifest(self):
"""Upload a list of the repo files we've uploaded"""
fn = '%s/repo_manifest' % self.workdir
koji.dump_json(fn, self.repo_files, indent=4)
self.session.uploadWrapper(fn, self.uploadpath)
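# The manifest is simply a JSON list of the uploaded paths relative to the
# repo dir, e.g. (illustrative): ["./pkglist", "./kojipkgs", "repodata/repomd.xml"]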
def do_createrepo(self, repodir, pkglist, groupdata, oldpkgs=None,
logname=None, oldrepodata=None, zck=False, zck_dict_dir=None,
createrepo_skip_stat=None):
"""Run createrepo
This is derived from CreaterepoTask.create_local_repo, but adapted to
our requirements here
:param bool|None createrepo_skip_stat: Override the default set in kojid.conf.
Note that when True, the resulting repo
could contain unexpected rpms.
"""
koji.ensuredir(repodir)
if self.options.use_createrepo_c:
cmd = ['/usr/bin/createrepo_c', '--error-exit-val']
else:
cmd = ['/usr/bin/createrepo']
if zck:
raise koji.GenericError("createrepo doesn't support zchunks")
if zck_dict_dir and not zck:
raise koji.GenericError("--zck-dict-dir makes no sense without --zck")
cmd.extend(['-vd', '-i', pkglist])
if groupdata and os.path.isfile(groupdata):
cmd.extend(['-g', groupdata])
if pkglist and oldrepodata and self.options.createrepo_update:
if not os.path.isdir(oldrepodata):
self.logger.warning("old repodata is missing: %s" % oldrepodata)
else:
datadir = os.path.join(repodir, 'repodata')
shutil.copytree(oldrepodata, datadir)
oldorigins = os.path.join(datadir, 'pkgorigins.gz')
if os.path.isfile(oldorigins):
# remove any previous origins file and rely on mergerepos
# to rewrite it (if we have external repos to merge)
os.unlink(oldorigins)
cmd.append('--update')
if createrepo_skip_stat is not None:
skip_stat = createrepo_skip_stat
else:
skip_stat = self.options.distrepo_skip_stat
if skip_stat:
cmd.append('--skip-stat')
if oldpkgs:
# generate delta-rpms
cmd.append('--deltas')
for op_dir in oldpkgs:
cmd.extend(['--oldpackagedirs', op_dir])
if zck:
cmd.append('--zck')
if zck_dict_dir:
zck_dict_dir = os.path.normpath(zck_dict_dir)
if os.path.isfile(zck_dict_dir):
raise koji.GenericError("zchunk dir path is file: %s" % zck_dict_dir)
if not os.path.isdir(zck_dict_dir):
raise koji.GenericError("zchunk dir path doesn't exist: %s" % zck_dict_dir)
cmd.extend(['--zck-dict-dir', zck_dict_dir])
cmd.append(repodir)
if logname is None:
logname = 'createrepo'
logfile = '%s/%s.log' % (self.workdir, logname)
status = log_output(self.session, cmd[0], cmd, logfile, self.getUploadDir(), logerror=True)
if not isSuccess(status):
raise koji.GenericError('failed to create repo: %s'
% parseStatus(status, format_shell_cmd(cmd)))
def do_multilib(self, arch, ml_arch, conf):
repodir = koji.pathinfo.distrepo(self.rinfo['id'], self.rinfo['tag_name'])
mldir = os.path.join(repodir, koji.canonArch(ml_arch))
ml_true = set() # multilib packages we need to include before depsolve
ml_conf = os.path.join(koji.pathinfo.work(), conf)
# read pkgs data from multilib repo
ml_pkgfile = os.path.join(mldir, 'kojipkgs')
ml_pkgs = koji.load_json(ml_pkgfile)
# step 1: figure out which packages are multilib (should already exist)
dnfbase = dnf.Base()
mlm = multilib.DevelMultilibMethod(ml_conf)
fs_missing = set()
for bnp in self.kojipkgs:
rpminfo = self.kojipkgs[bnp]
ppath = rpminfo['_pkgpath']
dnfbase.fill_sack(load_system_repo=False, load_available_repos=False)
po = dnfbase.sack.add_cmdline_package(ppath)
if mlm.select(po):
# we need a multilib package to be included
ml_bnp = bnp.replace(arch, self.archmap[arch])
ml_path = os.path.join(mldir, ml_bnp[0].lower(), ml_bnp)
# ^ XXX - should actually generate this
if ml_bnp not in ml_pkgs:
# not in our multilib repo
self.logger.error('%s (multilib) is not on the filesystem' % ml_path)
fs_missing.add(ml_path)
# we defer failure so we can report all the missing deps
continue
ml_true.add(ml_path)
# step 2: set up architectures for dnf configuration
self.logger.info("Resolving multilib for %s using method devel" % arch)
dnfdir = os.path.join(self.workdir, 'dnf')
# TODO: unwind this arch mess
archlist = (arch, 'noarch')
transaction_arch = arch
archlist = archlist + self.compat[self.biarch[arch]]
best_compat = self.compat[self.biarch[arch]][0]
if koji.arch.archDifference(best_compat, arch) > 0:
transaction_arch = best_compat
dnfconfig = """
[main]
debuglevel=2
#pkgpolicy=newest
#exactarch=1
gpgcheck=0
#reposdir=/dev/null
#cachedir=/dnfcache
installroot=%s
#logfile=/dnf.log
[koji-%s]
name=koji multilib task
baseurl=file://%s
enabled=1
""" % (dnfdir, self.id, mldir)
os.makedirs(os.path.join(dnfdir, "dnfcache"))
os.makedirs(os.path.join(dnfdir, 'var/lib/rpm'))
# step 3: proceed with dnf config and set up
yconfig_path = os.path.join(dnfdir, 'dnf.conf-koji-%s' % arch)
with koji._open_text_file(yconfig_path, 'wt') as f:
f.write(dnfconfig)
self.session.uploadWrapper(yconfig_path, self.uploadpath,
os.path.basename(yconfig_path))
conf = dnf.conf.Conf()
conf.reposdir = [] # don't use system repos at all
conf.read(yconfig_path)
dnfbase = dnf.Base(conf)
if hasattr(koji.arch, 'ArchStorage'):
dnfbase.conf.arch = transaction_arch
else:
koji.arch.canonArch = transaction_arch
dnfbase.read_all_repos()
dnfbase.fill_sack(load_system_repo=False, load_available_repos=True)
for pkg in ml_true:
dnfbase.install(pkg)
# step 4: execute dnf transaction to get dependencies
self.logger.info("Resolving dependencies for arch %s" % arch)
ml_needed = {}
try:
dnfbase.resolve()
self.logger.info('dnf depsolve successfully finished')
for po in dnfbase.transaction.install_set:
bnp = os.path.basename(po.localPkg())
dep_path = os.path.join(mldir, bnp[0].lower(), bnp)
ml_needed[dep_path] = po
if not os.path.exists(dep_path):
self.logger.error('%s (multilib dep) not on filesystem' % dep_path)
fs_missing.add(dep_path)
except dnf.exceptions.DepsolveError:
self.logger.error('dnf depsolve was unsuccessful')
raise
if len(fs_missing) > 0:
missing_log = os.path.join(self.workdir, 'missing_multilib.log')
with koji._open_text_file(missing_log, 'wt') as outfile:
outfile.write('The following multilib files were missing:\n')
for ml_path in fs_missing:
outfile.write(ml_path + '\n')
self.session.uploadWrapper(missing_log, self.uploadpath)
raise koji.GenericError('multilib packages missing. '
'See missing_multilib.log')
# step 5: update kojipkgs
for dep_path in ml_needed:
tspkg = ml_needed[dep_path]
bnp = os.path.basename(dep_path)
if bnp in self.kojipkgs:
# we expect duplication with noarch, but not other arches
if tspkg.arch != 'noarch':
self.logger.warning("Multilib duplicate: %s", bnp)
continue
rpminfo = ml_pkgs[bnp].copy()
# fix _pkgpath, which comes from another task and could be wrong
# for us
# TODO: would be better if we could use the proper path here
rpminfo['_pkgpath'] = dep_path
rpminfo['_multilib'] = True
self.kojipkgs[bnp] = rpminfo
def pick_key(self, keys, avail_keys):
best = None
best_idx = None
for sigkey in avail_keys:
if sigkey not in keys:
# skip, not a key we are looking for
continue
idx = keys.index(sigkey)
# a lower idx (earlier in the list) is preferable
if best is None or best_idx > idx:
best = sigkey
best_idx = idx
return best
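# Worked example (illustrative key IDs):
#   keys = ['64dab85d', '9867c58f']          # caller's preference order
#   avail_keys = ['9867c58f', '64dab85d']    # signatures available for the rpm
#   pick_key(keys, avail_keys) -> '64dab85d' (earliest entry in keys wins)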
def get_rpms(self, tag_id, arch, keys, opts):
keys = [key.lower() for key in keys]
# get the rpm data
rpms = []
builddirs = {}
for a in self.compat[arch]:
# note: self.compat includes noarch for non-src already
rpm_iter, builds = self.session.listTaggedRPMS(tag_id,
event=opts['event'],
arch=a,
latest=opts['latest'],
inherit=opts['inherit'],
rpmsigs=True)
for build in builds:
# draft builds are not allowed in dist repos for now
if build.get('draft'):
raise koji.BuildError("Draft build: %s is not allowed" % build['nvr'])
builddirs[build['id']] = koji.pathinfo.build(build)
rpms += list(rpm_iter)
# index by id and key
rpm_idx = {}
for rpminfo in rpms:
sigidx = rpm_idx.setdefault(rpminfo['id'], {})
sigidx[rpminfo['sigkey']] = rpminfo
# select our rpms
selected = {}
for rpm_id in rpm_idx:
avail_keys = [key.lower() for key in rpm_idx[rpm_id].keys()]
best_key = self.pick_key(keys, avail_keys)
if best_key is None:
# we lack a matching key for this rpm
fallback = avail_keys[0]
rpminfo = rpm_idx[rpm_id][fallback].copy()
rpminfo['sigkey'] = None
selected[rpm_id] = rpminfo
else:
selected[rpm_id] = rpm_idx[rpm_id][best_key]
selected[rpm_id]['best_key'] = best_key
# write signed rpms
# use a distinct name so we don't shadow the module-level log_output() helper
sign_log = ''
if opts.get('write_signed_rpms'):
results = []
rpm_with_key = []
with self.session.multicall(batch=1000) as m:
for rpm_id in selected:
if selected[rpm_id].get('best_key'):
results.append(m.host.writeSignedRPM(rpm_id, selected[rpm_id]['best_key']))
rpm_with_key.append(rpm_id)
for rpm_id, r in zip(rpm_with_key, results):
if isinstance(r._result, list):
sign_log += 'Signed RPM %s is written with %s key.\n' \
% (rpm_id, selected[rpm_id]['best_key'])
else:
sign_log += 'FAILED: Signed RPM %s is not written with %s key, ' \
'error: %s\n' % (rpm_id, selected[rpm_id]['best_key'],
r._result['faultString'])
if sign_log:
written_log = os.path.join(self.workdir, 'written_signed_rpms.log')
with koji._open_text_file(written_log, 'at') as outfile:
outfile.write(sign_log)
self.session.uploadWrapper(written_log, self.uploadpath)
# generate kojipkgs data and note missing files
fs_missing = []
sig_missing = []
kojipkgs = {}
for rpm_id in selected:
rpminfo = selected[rpm_id]
if rpminfo['sigkey'] is None:
sig_missing.append(rpm_id)
if opts['skip_missing_signatures']:
continue
# use the primary copy, if allowed (checked below)
pkgpath = '%s/%s' % (builddirs[rpminfo['build_id']],
koji.pathinfo.rpm(rpminfo))
else:
# use the signed copy
pkgpath = '%s/%s' % (builddirs[rpminfo['build_id']],
koji.pathinfo.signed(rpminfo, rpminfo['sigkey']))
if not os.path.exists(pkgpath):
fs_missing.append(pkgpath)
# we'll raise an error below
else:
bnp = os.path.basename(pkgpath)
rpminfo['_pkgpath'] = pkgpath
kojipkgs[bnp] = rpminfo
self.kojipkgs = kojipkgs
# report problems
if len(fs_missing) > 0:
missing_log = os.path.join(self.workdir, 'missing_files.log')
with koji._open_text_file(missing_log, 'wt') as outfile:
outfile.write('Some rpm files were missing.\n'
'Most likely, you want to create these signed copies.\n\n'
'Missing files:\n')
for pkgpath in sorted(fs_missing):
outfile.write(pkgpath)
outfile.write('\n')
self.session.uploadWrapper(missing_log, self.uploadpath)
raise koji.GenericError('Packages missing from the filesystem. '
'See missing_files.log.')
if sig_missing:
# log missing signatures and possibly error
missing_log = os.path.join(self.workdir, 'missing_signatures.log')
with koji._open_text_file(missing_log, 'wt') as outfile:
outfile.write('Some rpms were missing requested signatures.\n')
if opts['skip_missing_signatures']:
outfile.write('The skip_missing_signatures option was specified, so '
'these files were excluded.\n')
outfile.write('Requested keys: %r\n\n' % keys)
outfile.write('# RPM name: available keys\n')
fmt = '%(name)s-%(version)s-%(release)s.%(arch)s'
filenames = [[fmt % selected[r], r] for r in sig_missing]
for fname, rpm_id in sorted(filenames):
avail = to_list(rpm_idx.get(rpm_id, {}).keys())
outfile.write('%s: %r\n' % (fname, avail))
self.session.uploadWrapper(missing_log, self.uploadpath)
if (not opts['skip_missing_signatures'] and
not opts['allow_missing_signatures']):
raise koji.GenericError('Unsigned packages found. See '
'missing_signatures.log')
def link_pkgs(self):
for bnp in self.kojipkgs:
bnplet = bnp[0].lower()
ddir = os.path.join(self.repodir, 'Packages', bnplet)
koji.ensuredir(ddir)
dst = os.path.join(ddir, bnp)
pkgpath = self.kojipkgs[bnp]['_pkgpath']
self.logger.debug("os.symlink(%r, %r(", pkgpath, dst)
os.symlink(pkgpath, dst)
def split_pkgs(self, opts):
'''Direct rpms to subrepos if needed'''
for rpminfo in self.kojipkgs.values():
if opts.get('split_debuginfo') and koji.is_debuginfo(rpminfo['name']):
rpminfo['_subrepo'] = 'debug'
self.subrepos.add('debug')
def write_pkglist(self):
pkgs = []
subrepo_pkgs = {}
for bnp in self.kojipkgs:
rpminfo = self.kojipkgs[bnp]
bnplet = bnp[0].lower()
subrepo = rpminfo.get('_subrepo')
if subrepo:
# note the ../
subrepo_pkgs.setdefault(subrepo, []).append(
'../Packages/%s/%s\n' % (bnplet, bnp))
else:
pkgs.append('Packages/%s/%s\n' % (bnplet, bnp))
with koji._open_text_file('%s/pkglist' % self.repodir, 'wt') as fo:
for line in pkgs:
fo.write(line)
for subrepo in subrepo_pkgs:
koji.ensuredir('%s/%s' % (self.repodir, subrepo))
with koji._open_text_file('%s/%s/pkglist' % (self.repodir, subrepo), 'wt') as fo:
for line in subrepo_pkgs[subrepo]:
fo.write(line)
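# For illustration, a pkglist written above might contain lines like:
#   Packages/b/bash-5.2.26-1.fc40.x86_64.rpm
# and a debug subrepo pkglist references packages relative to itself:
#   ../Packages/b/bash-debuginfo-5.2.26-1.fc40.x86_64.rpm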
def write_kojipkgs(self):
filename = os.path.join(self.repodir, 'kojipkgs')
koji.dump_json(filename, self.kojipkgs, sort_keys=False)
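# For illustration, a kojipkgs entry keyed by rpm basename might look like
# (sketch, not the full field set):
#   "bash-5.2.26-1.fc40.x86_64.rpm": {"build_id": 123, "sigkey": "64dab85d",
#                                     "_pkgpath": "/mnt/koji/packages/..."}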
def get_options():
"""process options from command line and config file"""
# parse command line args
logger = logging.getLogger("koji.build")
parser = OptionParser()
parser.add_option("-c", "--config", dest="configFile",
help="use alternate configuration file", metavar="FILE",
default="/etc/kojid/kojid.conf")
parser.add_option("--user", help="specify user")
parser.add_option("--password", help="specify password")
parser.add_option("-f", "--fg", dest="daemon",
action="store_false", default=True,
help="run in foreground")
parser.add_option("--force-lock", action="store_true", default=False,
help="force lock for exclusive session")
parser.add_option("-v", "--verbose", action="store_true", default=False,
help="show verbose output")
parser.add_option("-d", "--debug", action="store_true", default=False,
help="show debug output")
parser.add_option("--debug-task", action="store_true", default=False,
help="enable debug output for tasks")
parser.add_option("--debug-xmlrpc", action="store_true", default=False,
help="show xmlrpc debug output")
parser.add_option("--debug-mock", action="store_true", default=False,
# obsolete option
help=SUPPRESS_HELP)
parser.add_option("--skip-main", action="store_true", default=False,
help="don't actually run main")
parser.add_option("--maxjobs", type='int', help="Specify maxjobs")
parser.add_option("--minspace", type='int', help="Specify minspace")
parser.add_option("--sleeptime", type='int', help="Specify the polling interval")
parser.add_option("--admin-emails", type='str', action="store", metavar="EMAILS",
help="Comma-separated addresses to send error notices to.")
parser.add_option("--topdir", help="Specify topdir")
parser.add_option("--topurl", help="Specify topurl")
parser.add_option("--workdir", help="Specify workdir")
parser.add_option("--chroot-tmpdir", help="Specify tmpdir in buildroot")
parser.add_option("--pluginpath", help="Specify plugin search path")
parser.add_option("--plugin", action="append", help="Load specified plugin")
parser.add_option("--mockdir", help="Specify mockdir")
parser.add_option("--mockuser", help="User to run mock as")
parser.add_option("-s", "--server", help="url of XMLRPC server")
parser.add_option("--pkgurl", help=SUPPRESS_HELP)
(options, args) = parser.parse_args()
if args:
parser.error("incorrect number of arguments")
# not reached
assert False # pragma: no cover
# load local config
config = koji.read_config_files(options.configFile, raw=True)
for x in config.sections():
if x != 'kojid':
quit('invalid section found in config file: %s' % x)
defaults = {'sleeptime': 15,
'maxjobs': 10,
'buildroot_basic_cleanup_delay': 120,
'buildroot_final_cleanup_delay': 86400,
'literal_task_arches': '',
'minspace': 8192,
'admin_emails': None,
'log_level': None,
'topdir': '/mnt/koji',
'topurl': None,
'workdir': '/var/tmp/koji',
'chroot_tmpdir': '/chroot_tmpdir',
'pluginpath': '/usr/lib/koji-builder-plugins',
'mockdir': '/var/lib/mock',
'mockuser': 'kojibuilder',
'packager': 'Koji',
'vendor': 'Koji',
'distribution': 'Koji',
'mockhost': 'koji-linux-gnu',
'smtphost': 'example.com',
'smtp_user': None,
'smtp_pass': None,
'from_addr': 'Koji Build System <buildsys@example.com>',
'krb_principal': None,
'host_principal_format': 'compile/%s@EXAMPLE.COM',
'keytab': '/etc/kojid/kojid.keytab',
'ccache': '/var/tmp/kojid.ccache',
'server': None,
'user': None,
'password': None,
'retry_interval': 60,
'max_retries': 120,
'offline_retry': True,
'offline_retry_interval': 120,
'log_timestamps': False,
'timeout': None,
'no_ssl_verify': False,
'use_fast_upload': True,
'use_createrepo_c': True,
'createrepo_skip_stat': True,
'createrepo_update': True,
'distrepo_skip_stat': False,
'copy_old_repodata': False,
'mock_bootstrap_image': False,
'pkgurl': None,
'allowed_scms': '',
'allowed_scms_use_config': True,
'allowed_scms_use_policy': False,
'scm_credentials_dir': None,
'support_rpm_source_layout': True,
'yum_proxy': None,
'maven_repo_ignore': '*.md5 *.sha1 maven-metadata*.xml _maven.repositories '
'_remote.repositories resolver-status.properties '
'*.lastUpdated',
'failed_buildroot_lifetime': 3600 * 4,
'rpmbuild_timeout': 3600 * 24,
'oz_install_timeout': 0,
'xz_options': '-z6T0',
'task_avail_delay': 300,
'cert': None,
'serverca': None,
'allow_noverifyssl': False,
'allow_password_in_scm_url': False}
if config.has_section('kojid'):
for name, value in config.items('kojid'):
if name in ['sleeptime', 'maxjobs', 'minspace', 'retry_interval',
'max_retries', 'offline_retry_interval', 'failed_buildroot_lifetime',
'timeout', 'rpmbuild_timeout', 'oz_install_timeout',
'task_avail_delay', 'buildroot_basic_cleanup_delay',
'buildroot_final_cleanup_delay']:
try:
defaults[name] = int(value)
except ValueError:
quit("value for %s option must be a valid integer" % name)
elif name in ['offline_retry', 'use_createrepo_c', 'createrepo_skip_stat',
'createrepo_update', 'use_fast_upload', 'support_rpm_source_layout',
'build_arch_can_fail', 'no_ssl_verify', 'log_timestamps',
'allow_noverifyssl', 'allowed_scms_use_config',
'allowed_scms_use_policy', 'allow_password_in_scm_url',
'distrepo_skip_stat', 'copy_old_repodata']:
defaults[name] = config.getboolean('kojid', name)
elif name in ['plugin', 'plugins']:
defaults['plugin'] = value.split()
elif name in to_list(defaults.keys()):
defaults[name] = value
elif name.upper().startswith('RLIMIT_'):
defaults[name.upper()] = value
else:
quit("unknown config option: %s" % name)
for name, value in defaults.items():
if getattr(options, name, None) is None:
setattr(options, name, value)
# honor topdir
if options.topdir:
koji.BASEDIR = options.topdir
koji.pathinfo.topdir = options.topdir
# make sure workdir exists
if not os.path.exists(options.workdir):
koji.ensuredir(options.workdir)
if not options.server:
msg = "the server option is required"
logger.error(msg)
parser.error(msg)
if not options.topurl:
msg = "the topurl option is required"
logger.error(msg)
parser.error(msg)
topurls = options.topurl.split()
options.topurls = topurls
if len(topurls) > 1:
# XXX - fix the rest of the code so this is not necessary
options.topurl = topurls[0]
if options.pkgurl:
logger.warning("The pkgurl option is obsolete")
if options.debug_mock:
logger.warning("The debug-mock option is obsolete")
# special handling for cert defaults
cert_defaults = {
'cert': '/etc/kojid/client.crt',
'serverca': '/etc/kojid/serverca.crt',
}
for name in cert_defaults:
if getattr(options, name, None) is None:
fn = cert_defaults[name]
if os.path.exists(fn):
setattr(options, name, fn)
return options
def quit(msg=None, code=1):
if msg:
logging.getLogger("koji.build").error(msg)
sys.stderr.write('%s\n' % msg)
sys.stderr.flush()
sys.exit(code)
if __name__ == "__main__":
koji.add_file_logger("koji", "/var/log/kojid.log")
# note we're setting logging params for all of koji*
options = get_options()
if options.log_level:
lvl = getattr(logging, options.log_level, None)
if lvl is None:
quit("Invalid log level: %s" % options.log_level)
logging.getLogger("koji").setLevel(lvl)
else:
logging.getLogger("koji").setLevel(logging.WARN)
if options.debug:
logging.getLogger("koji").setLevel(logging.DEBUG)
elif options.verbose:
logging.getLogger("koji").setLevel(logging.INFO)
if options.debug_task:
logging.getLogger("koji.build.BaseTaskHandler").setLevel(logging.DEBUG)
if options.admin_emails:
koji.add_mail_logger("koji", options.admin_emails)
# start a session and login
session_opts = koji.grab_session_options(options)
glob_session = koji.ClientSession(options.server, session_opts)
if options.cert and os.path.isfile(options.cert):
try:
# authenticate using SSL client certificates
glob_session.ssl_login(options.cert, None, options.serverca)
except koji.AuthError as e:
quit("Error: Unable to log in: %s" % e)
except requests.exceptions.ConnectionError:
quit("Error: Unable to connect to server %s" % (options.server))
elif options.user:
try:
# authenticate using user/password
glob_session.login()
except koji.AuthError:
quit("Error: Unable to log in. Bad credentials?")
except requests.exceptions.ConnectionError:
quit("Error: Unable to connect to server %s" % (options.server))
elif reqgssapi:
krb_principal = options.krb_principal
if krb_principal is None:
krb_principal = options.host_principal_format % socket.getfqdn()
try:
# Check ccache is not empty or authentication will fail
if os.path.exists(options.ccache) and os.stat(options.ccache).st_size == 0:
os.remove(options.ccache)
glob_session.gssapi_login(principal=krb_principal,
keytab=options.keytab,
ccache=options.ccache)
except Krb5Error as e:
quit("Kerberos authentication failed: %s" % e.args)
except socket.error as e:
quit("Could not connect to Kerberos authentication service: '%s'" % e.args[1])
else:
quit("No username/password/certificate supplied and Kerberos missing or not configured")
# make session exclusive
try:
glob_session.exclusiveSession(force=options.force_lock)
except koji.AuthLockError:
quit("Error: Unable to get lock. Trying using --force-lock")
if not glob_session.logged_in:
quit("Error: Unknown login error")
# make sure it works
try:
ret = glob_session.echo("OK")
except requests.exceptions.ConnectionError:
quit("Error: Unable to connect to server %s" % (options.server))
if ret != ["OK"]:
quit("Error: incorrect server response: %r" % (ret))
# run main
if options.daemon:
# detach
koji.daemonize()
main(options, glob_session)
# not reached
assert False # pragma: no cover
elif not options.skip_main:
koji.add_stderr_logger("koji")
main(options, glob_session)