Compare commits

...

484 Commits

Author SHA1 Message Date
pycook 8217053abf release: v2.3.5 2023-10-09 20:55:30 +08:00
pycook 8c17373e45 feat: get messenger url from common setting 2023-10-09 20:25:27 +08:00
wang-liang0615 a8e2595327
Frontend update (#192)
* fix: add package

* fix: handle the case where notice_info is null

* fix: 2 bugs

* feat: 1. add notification config to common 2. cmdb predefined values support webhook & other models

* fix: JSON type does not support predefined values

* fix: JSON type does not support predefined values

* fix: remove code
2023-10-09 19:52:19 +08:00
simontigers dfbba103cd
fix: init company structure resource (#191)
* fix: init company structure resource

* fix: notice_info null
2023-10-09 19:25:49 +08:00
wang-liang0615 86b9d5a7f4
Frontend update (#189)
* fix: add package

* fix: handle the case where notice_info is null

* fix: 2 bugs

* feat: 1. add notification config to common 2. cmdb predefined values support webhook & other models

* fix: JSON type does not support predefined values

* fix: JSON type does not support predefined values
2023-10-09 17:43:34 +08:00
simontigers 612922a1b7
feat: notice_config access messenger (#190) 2023-10-09 17:32:20 +08:00
pycook 2758c5e468 fix: delete user role 2023-10-09 15:40:18 +08:00
pycook d85c86a839 feat: The definition of attribute choice values supports webhook and other model attribute values. 2023-10-09 15:33:18 +08:00
wang-liang0615 8355137e43
Dev UI (#186)
* fix: add package

* fix: handle the case where notice_info is null

* fix: 2 bugs
2023-09-28 09:45:11 +08:00
pycook 2e644233bc release 2.3.4 2023-09-27 11:37:18 +08:00
simontigers d9b4082b46
feat: add api get_notice_by_ids (#184) 2023-09-27 09:54:30 +08:00
wang-liang0615 a07f984152
Frontend update (#183)
* fix: add package

* fix: handle the case where notice_info is null
2023-09-27 09:18:33 +08:00
pycook 4cab7ef6b0 feat: ci triggers 2023-09-26 21:18:34 +08:00
wang-liang0615 070c163de6
fix:add package (#182) 2023-09-26 21:12:10 +08:00
wang-liang0615 282a779fb1
Merge pull request #181 from veops/dev_ui
Frontend update
2023-09-26 20:34:27 +08:00
wang-liang0615 cb6b51a84c
Merge branch 'master' into dev_ui 2023-09-26 20:34:14 +08:00
wang-liang0615 34bd320e75 fix: bug where the same node appears twice in the topology graph 2023-09-26 20:13:12 +08:00
wang-liang0615 1eca5791f6 feat: register custom components with wangeditor 2023-09-26 20:07:00 +08:00
wang-liang0615 13b1c9a30c delete: remove getwx 2023-09-26 20:04:38 +08:00
simontigers b1a15a85d2
feat: common notice config (#180) 2023-09-26 19:44:20 +08:00
wang-liang0615 08e5a02caf
feat: UI update for triggers (#179)
* feat: add APIs & adaptation

* feat: triggers

* add packages & comment out code

* feat: webhook tips
2023-09-26 18:25:04 +08:00
wang-liang0615 308827b8fc feat: webhook tips 2023-09-26 18:17:23 +08:00
wang-liang0615 dc4ccb22b9 add packages & comment out code 2023-09-26 17:35:41 +08:00
wang-liang0615 c482e7ea43 feat: triggers 2023-09-26 17:01:31 +08:00
wang-liang0615 663c14f763 feat: add APIs & adaptation 2023-09-26 16:26:25 +08:00
pycook c6ee227bab fix: ci_cache 2023-09-25 15:46:07 +08:00
wang-liang0615 cb62cf2410
Merge pull request #178 from veops/dev_ui
Frontend update: dashboard optimization
2023-09-25 14:52:09 +08:00
wang-liang0615 133f32a6b0 perf: dashboard optimization 2023-09-25 14:50:08 +08:00
wang-liang0615 45c48c86fe Merge branch 'master' of github.com:veops/cmdb into dev_ui 2023-09-25 14:43:34 +08:00
pycook 2321f17dae refactor: CI triggers 2023-09-22 17:39:54 +08:00
simontigers ddb31a07a2
fix: icon svg support (#177) 2023-09-20 15:56:57 +08:00
pycook b474914fbb fix date search 2023-09-18 18:15:02 +08:00
pycook 26099a3d69 fix dashboard compute 2023-09-18 13:04:50 +08:00
pycook 62829c885b release v2.3.3 2023-09-15 17:57:39 +08:00
pycook 260aed6462 dashboard ui update 2023-09-15 17:36:10 +08:00
simontigers 3841999cca
feat: init resource for backend (#176) 2023-09-15 15:30:30 +08:00
pycook 14c03ce5d2 enhance dashboard 2023-09-15 15:26:20 +08:00
pycook f463ecd6e6 cmdb-api/api/lib/resp_format.py 2023-09-12 20:01:30 +08:00
pycook adc0cfd5c5 Detect circular dependencies when adding CIType relationships 2023-09-12 20:00:56 +08:00
wang-liang0615 086481657e
Computed attributes: trigger calculation (#174) 2023-09-11 19:16:05 +08:00
pycook d2f84ae3dc fix upload template and add /api/v0.1/attributes/<int:attr_id>/calc_computed_attribute 2023-09-11 19:15:31 +08:00
wang-liang0615 9f1b510cb3 Merge branch 'master' of github.com:veops/cmdb into dev_ui 2023-09-11 17:35:44 +08:00
wang-liang0615 61acb2483d Computed attributes: trigger calculation 2023-09-11 17:34:51 +08:00
pycook 0196c8a82c release 2.3.2 2023-09-07 13:44:51 +08:00
pycook bed2323fc1
Merge pull request #172 from veops/dev_ui
Create relationships when creating a CI and when batch importing
2023-09-07 11:04:49 +08:00
wang-liang0615 be9b308f56 Create relationships when creating a CI and when batch importing 2023-09-07 10:25:18 +08:00
pycook 8ba658ea1b Merge branch 'master' of github.com:veops/cmdb 2023-09-07 10:12:55 +08:00
pycook 0aa668cfa0 Add CI relationship when creating a CI; remove escaping of text values 2023-09-07 10:12:42 +08:00
pycook e20fd33a53
Merge pull request #171 from ronething/fix/makefile
optimize: makefile help
2023-09-05 20:34:16 +08:00
ashing 7462de63de
fix: review
Signed-off-by: ashing <axingfly@gmail.com>
2023-09-05 20:33:07 +08:00
ashing 5f9ba069ad
fix: review
Signed-off-by: ashing <axingfly@gmail.com>
2023-09-05 20:29:29 +08:00
ashing 5dc0d95ff8
fix: review
Signed-off-by: ashing <axingfly@gmail.com>
2023-09-05 20:21:20 +08:00
ashing e5536b76e6
optimize: makefile help
Signed-off-by: ashing <axingfly@gmail.com>
2023-09-05 20:06:31 +08:00
pycook 8b044efd4e
Merge pull request #170 from ronething/feat/xx
feat: support docker deploy mysql and redis
2023-09-05 19:28:47 +08:00
ivonGwy 747b5bf494
Merge pull request #169 from veops/doc
add document link
2023-09-05 15:41:52 +08:00
ivonGwy 21067022f6 add document link 2023-09-05 15:40:31 +08:00
ashing 4102c44fb2
feat: support docker deploy mysql and redis
Signed-off-by: ashing <axingfly@gmail.com>
2023-09-05 15:26:50 +08:00
wang-liang0615 600f95ce18
Merge pull request #168 from veops/dev_ui
UI update
2023-09-05 15:23:43 +08:00
wang-liang0615 950fd38044 Merge branch 'master' of github.com:veops/cmdb into dev_ui 2023-09-05 15:22:18 +08:00
wang-liang0615 01085615b5 Model associations: show reverse relationships 2023-09-05 15:22:08 +08:00
pycook 734f1940f9 Merge branch 'master' of github.com:veops/cmdb 2023-09-05 14:49:53 +08:00
pycook c25c1e4e4b move Dockerfile to docs 2023-09-05 14:49:34 +08:00
wang-liang0615 826a8306d3
Merge pull request #167 from veops/dev_ui
sub menu color
2023-09-04 16:34:26 +08:00
wang-liang0615 740aae573e sub menu color 2023-09-04 16:33:35 +08:00
wang-liang0615 17828a7631
Merge pull request #166 from veops/dev_ui
UI update
2023-09-04 13:15:27 +08:00
wang-liang0615 02cb497bdc Merge branch 'master' of github.com:veops/cmdb into dev_ui 2023-09-04 13:14:35 +08:00
wang-liang0615 05a7dc41ee sidebar 2023-09-04 13:14:11 +08:00
pycook 459c70ba2d import format 2023-09-02 12:09:41 +08:00
pycook 774f42ac34 format 2023-09-01 18:07:44 +08:00
pycook 420029a5e2 fix delete choice values 2023-08-31 16:02:24 +08:00
pycook ab8acbfd20 fix delete choice values 2023-08-31 15:18:15 +08:00
wang-liang0615 4468b6a8de
Merge pull request #165 from veops/dev_ui
proxy
2023-08-31 13:31:26 +08:00
wang-liang0615 6bf145d085 proxy 2023-08-31 13:28:15 +08:00
pycook 42b1e47e76
Merge pull request #162 from simontigers/cmdb_icon_manage
feat: add cmdb custom icon manage
2023-08-31 11:15:09 +08:00
pycook 673134003a
Merge pull request #163 from veops/dev_ui
Support uploading custom icons
2023-08-31 11:14:42 +08:00
hu.sima ef67885571 feat: add cmdb custom icon manage 2023-08-31 10:49:56 +08:00
wang-liang0615 075bf7217f Support uploading custom icons 2023-08-31 10:05:11 +08:00
pycook 3b7b8f435c fix update attribute 2023-08-30 13:34:10 +08:00
pycook 2b7f6aeef3 Merge branch 'master' of github.com:veops/cmdb 2023-08-29 14:49:21 +08:00
pycook 544fac8aca The default value of USE_ACL is set to True 2023-08-29 14:49:09 +08:00
pycook 3d0a56ec8c
Merge pull request #161 from simontigers/common_setting_format
fix: company info create
2023-08-29 11:01:25 +08:00
hu.sima d2d8482052 fix: company info create 2023-08-29 10:56:48 +08:00
pycook a0afae8d2e Merge branch 'master' of github.com:veops/cmdb 2023-08-25 11:01:24 +08:00
pycook 9f3da68636 update ad_ci when deleting ci 2023-08-25 10:59:38 +08:00
wang-liang0615 24b955c288
Merge pull request #160 from veops/dev_ui
Frontend update
2023-08-25 10:12:31 +08:00
wang-liang0615 a07b2d37ec fix: pressing Enter when adding a new type sends the request twice 2023-08-25 10:11:09 +08:00
wang-liang0615 c86fcb4e7b fix: pressing Enter when adding a new type sends the request twice 2023-08-25 10:08:04 +08:00
pycook ca7964f24b
Merge pull request #158 from EvanSung/perf_20230824_optimize_ad_ci_relation
perf(ad_ci_relation): optimize ad_ci relation
2023-08-24 16:26:43 +08:00
EvanSung c42ac634fb perf(ad_ci_relation): optimize ad_ci relation 2023-08-24 14:16:12 +08:00
pycook a6fc3341ce docker-compose add flask db-setup 2023-08-24 11:32:09 +08:00
pycook fc3f2e25f3 vxe-table-plugin-export-xlsx==2.0.0 2023-08-24 11:06:28 +08:00
pycook 511a5f70c6 add config CACHE_REDIS_PASSWORD and fix delete ci_type 2023-08-23 18:05:28 +08:00
pycook f8ff4d5e45 fix update ci 2023-08-22 11:34:40 +08:00
pycook 3ab72cceaf Register api and commands with absolute paths 2023-08-21 20:08:23 +08:00
pycook 4ab7e3c70c fix merge conflict 2023-08-21 11:55:49 +08:00
pycook a7fe75f7df fix g.user 2023-08-21 11:54:33 +08:00
pycook 3474a71a75 version: 2.3.1 2023-08-20 11:24:53 +08:00
pycook 6531baff64 lint 2023-08-20 11:23:55 +08:00
pycook ed5936250f
Merge pull request #157 from EvanSung/fix_20230817_guser_issue
fix(acl): g user issue
2023-08-17 22:12:45 +08:00
EvanSung 52c32e2ab1 fix(acl): g user issue 2023-08-17 18:40:45 +08:00
pycook d3224625b6 fix MyJSONEncoder 2023-08-16 21:28:27 +08:00
pycook f158c7e33a
Merge pull request #155 from veops/dev_ui
Frontend update
2023-08-16 13:01:13 +08:00
pycook 6dc12bb6ac Merge branch 'master' of github.com:veops/cmdb 2023-08-16 13:00:44 +08:00
pycook b33ae16c00 Delete user without soft delete 2023-08-16 13:00:30 +08:00
wang-liang0615 2caffc2670 Merge branch 'master' of github.com:veops/cmdb into dev_ui 2023-08-16 10:09:47 +08:00
wang-liang0615 f28af51007 delete user 2023-08-16 10:09:25 +08:00
pycook 3a0369559f
Merge pull request #154 from simontigers/common_setting_format
fix: init-import-user-from-acl
2023-08-15 20:47:36 +08:00
hu.sima a74a2c5a94 fix: init-import-user-from-acl 2023-08-15 20:45:28 +08:00
pycook 9fbcb2838e
Merge pull request #153 from simontigers/common_setting_format
fix: import_user_from_acl
2023-08-15 20:25:09 +08:00
hu.sima 60a445b972 fix: import_user_from_acl 2023-08-15 20:19:45 +08:00
pycook bfdd7b6a0e Merge branch 'master' of github.com:veops/cmdb 2023-08-15 19:48:11 +08:00
pycook ab093d2493 [update] delete roles, users, attributes 2023-08-15 19:47:59 +08:00
wang-liang0615 315a578a31
Merge pull request #152 from veops/dev_ui
Frontend update
2023-07-31 19:47:03 +08:00
wang-liang0615 1e16dc5e5b Attribute library 2023-08-15 19:34:17 +08:00
wang-liang0615 f67e196acf Attribute library 2023-08-15 19:26:49 +08:00
wang-liang0615 439e25d5dd Attribute library 2023-08-15 19:21:09 +08:00
wang-liang0615 ea59c0d71f Attribute library 2023-08-15 19:10:26 +08:00
wang-liang0615 1137127aab Admin: model associations, relationship deletion && filtering 2023-08-15 15:02:46 +08:00
pycook 4ad1b5282e update gitattributes 2023-08-15 13:41:45 +08:00
wang-liang0615 cdd5e4d9aa
Merge pull request #150 from EvanSung/optimize_20230810_acl_resource_fe
refactor(fe): reduce the width of resource mgt table
2023-08-10 19:32:49 +08:00
pycook 432de5e847
Merge pull request #148 from simontigers/common_setting_format
fix: default arg value
2023-08-10 19:31:18 +08:00
pycook 3a2339765a
Merge pull request #149 from veops/dev_ui
UI update: password
2023-08-10 19:28:24 +08:00
EvanSung b5a2af7420 refactor(fe): reduce the width of resource mgt table 2023-08-10 19:23:41 +08:00
wang-liang0615 8b267613d6 Merge branch 'master' of github.com:veops/cmdb into dev_ui 2023-08-10 19:21:28 +08:00
wang-liang0615 b365eb27f6 Add plaintext transmission of passwords 2023-08-10 19:21:10 +08:00
hu.sima 2125f020b5 fix: default arg value 2023-08-10 19:05:56 +08:00
pycook ea762e35a0
Merge pull request #147 from simontigers/common_setting_format
fix: remove useless
2023-08-10 19:01:25 +08:00
hu.sima f11aadf6d4 fix: remove useless 2023-08-10 18:55:32 +08:00
pycook 9cbf133b9f
Merge pull request #146 from simontigers/common_setting_format
Common setting format
2023-08-10 18:23:24 +08:00
hu.sima 95e8f9de74 fix: remove unused column 2023-08-10 16:29:52 +08:00
hu.sima 26792147ae style: format common setting 2023-08-10 15:30:01 +08:00
pycook 4f9b581c2e
Merge pull request #145 from EvanSung/optimize_20230810_auth_require
optimize(auth): auth request json
2023-08-10 11:24:23 +08:00
EvanSung e2b1cb3003 optimize(auth): auth request json 2023-08-10 10:43:59 +08:00
pycook f75a85b48a fix celery config 2023-08-08 16:33:24 +08:00
pycook 313fc80e54 Merge branch 'master' of github.com:veops/cmdb 2023-08-08 13:16:14 +08:00
pycook e0666689e5 upgrade celery 2023-08-08 13:16:07 +08:00
pycook 7a9fd4f9d6
Merge pull request #144 from veops/dev_ui
UI update: fix preferenceList=>attrList
2023-08-08 09:21:10 +08:00
wang-liang0615 2fd706be85 Merge branch 'master' of github.com:veops/cmdb into dev_ui 2023-08-08 09:11:24 +08:00
wang-liang0615 3df51bb670 fix preferenceList=>attrList 2023-08-08 09:11:03 +08:00
pycook 9bbbcbe6dc upgrade flask to 2.3.2 and replace g.user with current_user 2023-08-06 21:54:18 +08:00
pycook 16d6b40e8d
Merge pull request #138 from lovvvve/fix_ldap
fix ldap login
2023-08-04 11:31:58 +08:00
pycook ef2d3812a2
Merge pull request #142 from veops/dev_ui
Asynchronous processing of CI batch updates and deletes
2023-08-04 09:27:55 +08:00
wang-liang0615 bc653efd04 Merge branch 'master' of github.com:veops/cmdb into dev_ui 2023-08-03 16:54:47 +08:00
wang-liang0615 d891d7365d Asynchronous processing of CI batch updates and deletes 2023-08-03 16:54:27 +08:00
pycook 9953b2fc98
Merge pull request #139 from EvanSung/fix-post-acltrigger-session-invalid
fix(trigger): session invalid issue
2023-08-02 19:33:05 +08:00
songbing01249 8de54812dc fix(trigger): session invalid issue 2023-08-02 18:22:42 +08:00
lovvvve eb7d52cf35 fix ldap login 2023-08-01 11:27:29 +00:00
pycook 6c4a5f2f6b
Merge pull request #134 from veops/dependabot/pip/cmdb-api/pillow-9.3.0
Bump pillow from 9.2.0 to 9.3.0 in /cmdb-api
2023-08-01 15:57:02 +08:00
pycook 17c5d4538b
Merge pull request #135 from simontigers/remove_pandas
fix: remove pandas
2023-08-01 15:55:15 +08:00
hu.sima 6c3e3f9eed fix: remove pandas 2023-08-01 15:32:44 +08:00
dependabot[bot] b0494adc17
Bump pillow from 9.2.0 to 9.3.0 in /cmdb-api
Bumps [pillow](https://github.com/python-pillow/Pillow) from 9.2.0 to 9.3.0.
- [Release notes](https://github.com/python-pillow/Pillow/releases)
- [Changelog](https://github.com/python-pillow/Pillow/blob/main/CHANGES.rst)
- [Commits](https://github.com/python-pillow/Pillow/compare/9.2.0...9.3.0)

---
updated-dependencies:
- dependency-name: pillow
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-08-01 05:51:16 +00:00
pycook fc133f2ae9 Merge branch 'master' of github.com:veops/cmdb 2023-08-01 13:47:34 +08:00
pycook ac6e3a0318 fix dependabot alerts 2023-08-01 13:47:11 +08:00
pycook 404ec976cc fix dependabot alerts 2023-08-01 13:46:47 +08:00
pycook 4211bbcbc9
Merge pull request #130 from veops/dependabot/pip/cmdb-api/certifi-2023.7.22
Bump certifi from 2023.5.7 to 2023.7.22 in /cmdb-api
2023-08-01 13:14:25 +08:00
pycook 0158636671
Merge pull request #132 from veops/dev_ui
Role deletion related changes
2023-07-31 19:54:19 +08:00
wang-liang0615 d986bc3bbc Role deletion related changes 2023-07-31 19:52:06 +08:00
pycook 044b820548 Merge branch 'master' of github.com:veops/cmdb 2023-07-31 18:39:46 +08:00
pycook 536daa6d4f fix delete ci_type 2023-07-31 18:39:33 +08:00
pycook b0620b043b
Merge pull request #131 from veops/dev_ui
Frontend ACL
2023-07-28 18:03:36 +08:00
wang-liang0615 a88c9cf7f7 common-setting 2023-07-27 15:47:13 +08:00
wang-liang0615 be50f505d1 acl 2023-07-27 15:30:27 +08:00
wang-liang0615 0bb4f633d6 fix acl change page size 2023-07-27 15:08:25 +08:00
dependabot[bot] 78b521f3af
Bump certifi from 2023.5.7 to 2023.7.22 in /cmdb-api
Bumps [certifi](https://github.com/certifi/python-certifi) from 2023.5.7 to 2023.7.22.
- [Commits](https://github.com/certifi/python-certifi/compare/2023.05.07...2023.07.22)

---
updated-dependencies:
- dependency-name: certifi
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-07-25 23:35:30 +00:00
pycook 77bc850d4a
Merge pull request #129 from veops/dev_ui
Frontend update
2023-07-25 18:19:47 +08:00
wang-liang0615 e52f201ba1 Merge branch 'master' of github.com:veops/cmdb into dev_ui 2023-07-25 13:11:03 +08:00
wang-liang0615 64aea424dc Highlight hint for authorization 2023-07-25 13:10:45 +08:00
pycook 0655b0e9eb add command cmdb-index-table-upgrade 2023-07-25 10:31:30 +08:00
wang-liang0615 cce0649299 style: fix misaligned rows when creating a new attribute 2023-07-25 10:18:22 +08:00
pycook 52574c64cc Deprecate 3 tables: c_value_datetime, c_value_floats, c_value_integers; add write validation for time-type attribute values 2023-07-24 21:55:00 +08:00
pycook fb904b01a6 Forbid deleting the unique-identifier attribute 2023-07-21 15:58:41 +08:00
pycook 63af79ec45 Merge branch 'master' of github.com:veops/cmdb 2023-07-20 18:37:14 +08:00
pycook 38af86317a fix docker-compose 2023-07-20 18:36:32 +08:00
pycook 03bac86588
Merge pull request #127 from veops/dev_ui
fix currentValueType
2023-07-20 15:39:11 +08:00
wang-liang0615 130b68cadd fix currentValueType 2023-07-20 15:30:12 +08:00
pycook 65000f8141 Update architecture diagram 2023-07-20 11:01:25 +08:00
pycook 23692ad50b update readme 2023-07-20 11:01:25 +08:00
pycook 16cd34e8b8 update readme 2023-07-20 11:01:25 +08:00
pycook 985f67ee47 lint 2023-07-20 11:01:25 +08:00
wang-liang0615 8d95f8d57d Role authorization 2023-07-20 11:01:25 +08:00
pycook cf6230008d Clean up space 2023-07-20 11:01:25 +08:00
songbing01249 ec97fa84d8 fix(ci_type_group_manager): fix resources issues 2023-07-20 11:01:25 +08:00
pycook 76f074704b update cmdb_api.md 2023-07-20 10:56:58 +08:00
wang-liang0615 e5addab3af Remove fullscreen-related code 2023-07-19 17:46:27 +08:00
wang-liang0615 1c6be9e281 format 2023-07-19 15:36:46 +08:00
wang-liang0615 9552892c68 ops table getVxetableRef 2023-07-19 14:39:57 +08:00
wang-liang0615 b59e1af318 Build acl 2023-07-19 13:52:24 +08:00
wang-liang0615 d164d883ab Merge branch 'master' of github.com:veops/cmdb into dev_ui 2023-07-18 15:16:50 +08:00
wang-liang0615 1fef160d9e Remove unnecessary files 2023-07-18 15:16:32 +08:00
wang-liang0615 2e537d390a acl style upgrade 2023-07-18 15:14:35 +08:00
pycook 5b9fe15afa Merge pull request #119 from veops/dev_ui
Model attribute is_index
2023-07-17 18:15:28 +08:00
wang-liang0615 89fa5f2243 Merge branch 'master' of github.com:veops/cmdb into dev_ui 2023-07-17 17:21:26 +08:00
wang-liang0615 652a5c7fb8 Model attribute is_index 2023-07-17 17:19:44 +08:00
pycook afb6adec89 update local.md 2023-07-17 13:32:34 +08:00
pycook a9db4285ab Merge pull request #118 from veops/dev_ui
Dev UI
2023-07-14 18:06:02 +08:00
wang-liang0615 a04bdc29a5 Remove usedfc 2023-07-14 15:20:58 +08:00
wang-liang0615 91e0e076a7 Remove usedfc 2023-07-14 15:20:36 +08:00
wang-liang0615 339a7b857e acl frontend 2023-07-14 14:34:35 +08:00
pycook e86e5ad1fd PyJWT==2.4.0 2023-07-13 17:07:33 +08:00
pycook c50a69de77 Merge pull request #117 from lovvvve/patch-4
Update click_cmdb.py
2023-07-13 15:51:36 +08:00
lovvvve 4d16e9e6d9 Update click_cmdb.py
add-user remove --is_admin
2023-07-13 15:49:35 +08:00
pycook fcea4dcb9f Merge pull request #116 from lovvvve/patch-3
Update click_cmdb.py
2023-07-13 15:23:24 +08:00
lovvvve f98fd24c62 Update click_cmdb.py
add-user  remove --is_admin
2023-07-13 15:18:47 +08:00
pycook f10eeb8439 update README 2023-07-13 09:34:17 +08:00
pycook f070948122 Merge pull request #115 from veops/doc
change screenshot image
2023-07-12 17:22:58 +08:00
ivonGwy 4112bcf547 change screenshot image 2023-07-12 17:21:10 +08:00
pycook 2292756bf7 Merge pull request #114 from veops/doc
change image size
2023-07-12 17:11:20 +08:00
ivonGwy 93e2483974 change image size 2023-07-12 17:07:17 +08:00
pycook fbb4fcc255 Merge pull request #113 from veops/doc
add qrcode for gzh
2023-07-12 16:49:12 +08:00
ivonGwy fc77241006 add qrcode for gzh 2023-07-12 16:12:17 +08:00
pycook 0d04ad7d90 update requirements 2023-07-12 15:32:46 +08:00
pycook e6290e49ea update docker-compose 2023-07-12 11:59:51 +08:00
pycook 97aa2e0ebe remove .gitattributes 2023-07-12 11:50:05 +08:00
pycook 939d9dc3cd md format 2023-07-12 10:14:47 +08:00
pycook 576d2e3bc4 Update README.md 2023-07-12 10:09:11 +08:00
pycook 9a40246d29 Update README.md 2023-07-12 10:01:15 +08:00
pycook 044f95c3be Default account and password after docker-compose build 2023-07-11 19:40:40 +08:00
pycook a386de355e docker-compose is ok 2023-07-11 18:22:17 +08:00
pycook b93afc1790 docker-compose is ok 2023-07-11 18:12:22 +08:00
pycook 77d89677ef Full upgrade of frontend and backend 2023-07-10 20:13:39 +08:00
pycook 7ec6775f03 Friendly link: Spug 2023-07-10 20:07:31 +08:00
pycook 98cc853dbc Full upgrade of frontend and backend 2023-07-10 20:07:20 +08:00
pycook f57ff80099 Merge pull request #90 from lovvvve/patch-2
fix: 🐛 db search
2021-11-10 20:18:18 +08:00
lovvvve 51e4b5dd8f fix: 🐛 db search
Escape ":" character in SQLAlchemy
2021-11-10 18:56:42 +08:00
pycook dbf44a020b Merge pull request #77 from x-7/x-7-patch-1
cmdb-api:add attr check in ci_manager update method
2021-04-22 09:14:33 +08:00
x-7 8e578797ef Update ci.py
cmdb-api:add attr check in ci_manager update method
2021-04-21 19:17:05 +08:00
pycook 158de4b946 Merge pull request #69 from lovvvve/patch-1
CiManager.add and AttributeValueManager.create_or_update_attr_value update
2021-01-28 17:09:01 +08:00
lovvvve 3cf234d49e Update value.py
Fix the bug where the value is deleted when the value type is int or float and the value equals 0
2021-01-28 16:57:20 +08:00
lovvvve a7debc1b3b Update ci.py
Compatible with py2
2021-01-28 16:56:04 +08:00
lovvvve 9268da2ffa Update value.py 2021-01-28 16:46:19 +08:00
lovvvve cfcb092478 Update value.py
feat(AttributeValueManager.create_or_update_attr_value()): AttributeValue update skip The same value
2021-01-28 16:32:53 +08:00
lovvvve 0d8b41b64a Update value.py
feat(AttributeValueManager.create_or_update_attr_value()): AttributeValue update skip The same value
2021-01-28 16:30:17 +08:00
lovvvve d85715793f Update ci.py
feat(CiManager.add()): Check the attribute is in the ci_type attributes list
2021-01-28 16:27:56 +08:00
pycook afbdbe4682 Merge pull request #65 from shaohaojiecoder/stable
Stable
2020-12-14 09:20:15 +08:00
shaohaojiecoder e629abebb7 remove weeds 2020-12-13 16:58:06 +08:00
shaohaojiecoder 029c12365a delay render 2020-12-13 16:42:17 +08:00
pycook 4d000d9805 yarn.lock update 2020-11-22 11:45:33 +08:00
pycook f1fc66bd2c Fix github security 2020-11-22 11:42:17 +08:00
pycook d6af4af1d1 upgrade ui packages 2020-11-22 11:13:46 +08:00
pycook 7fe2bdca5f Create codeql-analysis.yml 2020-11-22 10:45:45 +08:00
pycook 1432131d2b Merge pull request #59 from pycook/dependabot/npm_and_yarn/cmdb-ui/dot-prop-4.2.1
Bump dot-prop from 4.2.0 to 4.2.1 in /cmdb-ui
2020-11-22 10:28:34 +08:00
pycook bc94d039f5 Merge pull request #52 from pycook/dependabot/npm_and_yarn/cmdb-ui/elliptic-6.5.3
Bump elliptic from 6.4.1 to 6.5.3 in /cmdb-ui
2020-11-22 10:27:49 +08:00
pycook 5abafed9c8 Merge pull request #54 from pycook/dependabot/npm_and_yarn/cmdb-ui/quill-1.3.7
Bump quill from 1.3.6 to 1.3.7 in /cmdb-ui
2020-11-22 10:27:15 +08:00
dependabot[bot] 04e249feac Bump dot-prop from 4.2.0 to 4.2.1 in /cmdb-ui
Bumps [dot-prop](https://github.com/sindresorhus/dot-prop) from 4.2.0 to 4.2.1.
- [Release notes](https://github.com/sindresorhus/dot-prop/releases)
- [Commits](https://github.com/sindresorhus/dot-prop/compare/v4.2.0...v4.2.1)

Signed-off-by: dependabot[bot] <support@github.com>
2020-11-22 02:26:41 +00:00
pycook ef3e6bc6b0 Merge pull request #55 from pycook/dependabot/npm_and_yarn/cmdb-ui/handlebars-4.7.6
Bump handlebars from 4.4.5 to 4.7.6 in /cmdb-ui
2020-11-22 10:26:38 +08:00
pycook d9d5f8f818 Merge pull request #56 from pycook/dependabot/npm_and_yarn/cmdb-ui/http-proxy-1.18.1
Bump http-proxy from 1.17.0 to 1.18.1 in /cmdb-ui
2020-11-22 10:25:58 +08:00
dependabot[bot] 578da0807c Bump http-proxy from 1.17.0 to 1.18.1 in /cmdb-ui
Bumps [http-proxy](https://github.com/http-party/node-http-proxy) from 1.17.0 to 1.18.1.
- [Release notes](https://github.com/http-party/node-http-proxy/releases)
- [Changelog](https://github.com/http-party/node-http-proxy/blob/master/CHANGELOG.md)
- [Commits](https://github.com/http-party/node-http-proxy/compare/1.17.0...1.18.1)

Signed-off-by: dependabot[bot] <support@github.com>
2020-09-11 00:19:27 +00:00
dependabot[bot] 3eb35f5497 Bump handlebars from 4.4.5 to 4.7.6 in /cmdb-ui
Bumps [handlebars](https://github.com/wycats/handlebars.js) from 4.4.5 to 4.7.6.
- [Release notes](https://github.com/wycats/handlebars.js/releases)
- [Changelog](https://github.com/handlebars-lang/handlebars.js/blob/master/release-notes.md)
- [Commits](https://github.com/wycats/handlebars.js/compare/v4.4.5...v4.7.6)

Signed-off-by: dependabot[bot] <support@github.com>
2020-09-10 03:19:57 +00:00
dependabot[bot] 9669ad04cd Bump quill from 1.3.6 to 1.3.7 in /cmdb-ui
Bumps [quill](https://github.com/quilljs/quill) from 1.3.6 to 1.3.7.
- [Release notes](https://github.com/quilljs/quill/releases)
- [Changelog](https://github.com/quilljs/quill/blob/v1.3.7/CHANGELOG.md)
- [Commits](https://github.com/quilljs/quill/compare/v1.3.6...v1.3.7)

Signed-off-by: dependabot[bot] <support@github.com>
2020-09-04 00:38:10 +00:00
dependabot[bot] 70214807ca Bump elliptic from 6.4.1 to 6.5.3 in /cmdb-ui
Bumps [elliptic](https://github.com/indutny/elliptic) from 6.4.1 to 6.5.3.
- [Release notes](https://github.com/indutny/elliptic/releases)
- [Commits](https://github.com/indutny/elliptic/compare/v6.4.1...v6.5.3)

Signed-off-by: dependabot[bot] <support@github.com>
2020-08-01 08:41:43 +00:00
pycook 7c1c309f7a Merge pull request #51 from pycook/dependabot/npm_and_yarn/cmdb-ui/lodash-4.17.19
Bump lodash from 4.17.14 to 4.17.19 in /cmdb-ui
2020-07-21 18:17:46 +08:00
dependabot[bot] 9b9799ff5e Bump lodash from 4.17.14 to 4.17.19 in /cmdb-ui
Bumps [lodash](https://github.com/lodash/lodash) from 4.17.14 to 4.17.19.
- [Release notes](https://github.com/lodash/lodash/releases)
- [Commits](https://github.com/lodash/lodash/compare/4.17.14...4.17.19)

Signed-off-by: dependabot[bot] <support@github.com>
2020-07-19 03:35:06 +00:00
pycook b2578b61fa add command: add-user | del-user 2020-06-11 21:37:41 +08:00
pycook 619f47ae13 Merge pull request #49 from pycook/dependabot/npm_and_yarn/cmdb-ui/websocket-extensions-0.1.4
Bump websocket-extensions from 0.1.3 to 0.1.4 in /cmdb-ui
2020-06-08 18:37:31 +08:00
dependabot[bot] 37c5e31799 Bump websocket-extensions from 0.1.3 to 0.1.4 in /cmdb-ui
Bumps [websocket-extensions](https://github.com/faye/websocket-extensions-node) from 0.1.3 to 0.1.4.
- [Release notes](https://github.com/faye/websocket-extensions-node/releases)
- [Changelog](https://github.com/faye/websocket-extensions-node/blob/master/CHANGELOG.md)
- [Commits](https://github.com/faye/websocket-extensions-node/compare/0.1.3...0.1.4)

Signed-off-by: dependabot[bot] <support@github.com>
2020-06-07 15:54:30 +00:00
pycook ab70b2a655 Merge pull request #48 from lovvvve/master
fix(sso login): sso login redirect
2020-06-01 12:05:05 +08:00
Lovvvve c285606f4a fix(sso login): sso login redirect 2020-06-01 12:01:55 +08:00
Lovvvve 6d3611bd73 fix(sso login): sso login redirect 2020-06-01 11:48:21 +08:00
pycook 764f6a07e0 fix merge conflict 2020-05-28 20:39:47 +08:00
pycook ae8d487af4 Readme is in Chinese by default
Committer: pycook <pycook@126.com>

Author:    pycook <pycook@126.com>
2020-05-28 20:35:11 +08:00
pycook 87c6554555 update readme 2020-05-28 20:28:49 +08:00
pycook f5671c2a2a Fix: spelling mistakes 2020-05-28 20:28:49 +08:00
pycook 43ad3dfa7b release version 2.1 2020-05-28 20:28:49 +08:00
pycook 29fa17a0b8 update readme 2020-04-10 17:22:33 +08:00
pycook 5191d6ed73 Merge branch 'master' of https://github.com/pycook/cmdb 2020-04-07 18:03:03 +08:00
pycook 8348f8e7b1 Fix the judgment of app admin 2020-04-07 18:02:26 +08:00
pycook 75c48a0807 Fix: spelling mistakes 2020-04-01 21:40:51 +08:00
pycook 5b38385f7e release version 2.1 2020-04-01 21:20:47 +08:00
pycook 036e1d236b auth with ldap 2020-04-01 20:30:44 +08:00
pycook c31be0f753 UI: batch update relation 2020-04-01 11:09:41 +08:00
pycook 764d2fac3f add .eslintrc.js 2020-03-26 17:35:26 +08:00
pycook f4079e9c3e Merge pull request #42 from pycook/dependabot/npm_and_yarn/cmdb-ui/yarn-1.22.0
Bump yarn from 1.21.1 to 1.22.0 in /cmdb-ui
2020-03-26 17:29:53 +08:00
pycook 2a0ed72235 fix: delete attribute 2020-03-23 15:49:33 +08:00
dependabot[bot] 9e803ae4c7 Bump yarn from 1.21.1 to 1.22.0 in /cmdb-ui
Bumps [yarn](https://github.com/yarnpkg/yarn) from 1.21.1 to 1.22.0.
- [Release notes](https://github.com/yarnpkg/yarn/releases)
- [Changelog](https://github.com/yarnpkg/yarn/blob/master/CHANGELOG.md)
- [Commits](https://github.com/yarnpkg/yarn/compare/v1.21.1...v1.22.0)

Signed-off-by: dependabot[bot] <support@github.com>
2020-03-22 03:14:30 +00:00
pycook bebdb61adf update deps 2020-03-22 11:14:09 +08:00
pycook f49cad771b Merge pull request #41 from pycook/dependabot/npm_and_yarn/cmdb-ui/acorn-5.7.4
Bump acorn from 5.7.3 to 5.7.4 in /cmdb-ui
2020-03-15 13:06:51 +08:00
dependabot[bot] a5b4fbda40 Bump acorn from 5.7.3 to 5.7.4 in /cmdb-ui
Bumps [acorn](https://github.com/acornjs/acorn) from 5.7.3 to 5.7.4.
- [Release notes](https://github.com/acornjs/acorn/releases)
- [Commits](https://github.com/acornjs/acorn/compare/5.7.3...5.7.4)

Signed-off-by: dependabot[bot] <support@github.com>
2020-03-14 18:55:18 +00:00
pycook 2cce2d5cf2 Fix: permission management 2020-03-13 10:30:21 +08:00
pycook e720b7af66 Merge branch 'develop' of https://github.com/pycook/cmdb into develop 2020-03-10 17:03:23 +08:00
pycook 09e4a5111b Merge pull request #40 from OhBonsai/develop
chore: use wait script to hang api before cache/db/es started
2020-03-10 17:01:58 +08:00
penzai 3539b12503 chore: use wait script to hang api before cache/db/es started 2020-03-10 16:49:40 +08:00
pycook 21d8673b5d Merge pull request #38 from AngrygrayWolf/dev
Change the requirements to support python3.8
2020-03-07 22:06:25 +08:00
what 7154426dc7 Change Pipfile 2020-03-07 21:51:37 +08:00
what ca75c7dcd0 Change the requirements to support python3.8 2020-03-07 21:20:46 +08:00
pycook 194a2254a6 fix case sensitive of ES search 2020-02-28 14:32:51 +08:00
pycook 26abad14d0 Merge branch 'develop' of https://github.com/pycook/cmdb into develop 2020-02-23 23:13:59 +08:00
pycook 1521a71f9c alter table c_preference_relation_views column name varchar(64) 2020-02-23 23:13:22 +08:00
pycook d425b455f1 Merge pull request #36 from OhBonsai/develop
test: add ci and ci relation crud test cases
2020-02-23 23:12:36 +08:00
pycook 230307474b version 2.1 and update readme 2020-02-23 20:21:13 +08:00
pycook 69d6b40e39 / redirect to /relation_views 2020-02-23 18:53:28 +08:00
pycook 5dc2f89e7f fix i18n 2020-02-23 18:41:23 +08:00
pycook 9eaca4d6a0 add library future 2020-02-21 23:29:20 +08:00
pycook 3680a462f5 Remove Chinese comments 2020-02-21 23:14:26 +08:00
pycook 3ac50e7cd8 lint 2020-02-21 22:46:12 +08:00
pycook 21b2cc1d5d The resource view is made into a two-level menu 2020-02-21 22:44:10 +08:00
penzai cd5448cc7d test: add ci and ci relation crud test cases 2020-02-18 22:05:13 +08:00
pycook 10610bdb4b logo left justify 2020-02-16 19:18:51 +08:00
pycook b5c2156387 Merge pull request #35 from shaohaojiecoder/i18n
I18n
2020-02-16 19:06:58 +08:00
pycook b05ae0d1a7 Merge pull request #34 from OhBonsai/develop
add test cases
2020-02-16 19:06:24 +08:00
shaohaojiecoder bbf6138d43 Merge branch 'develop' into i18n 2020-02-16 18:19:09 +08:00
shaohaojiecoder 1ba3e6a680 add a log pic 2020-02-16 18:18:15 +08:00
penzai 64045c1f93 test: add some test cases 2020-02-16 18:03:33 +08:00
penzai 5a3e55813c model: allow origin and ticket_id nullable in OperationRecord 2020-02-16 18:03:08 +08:00
penzai bc72e58886 auth: add user in flask.g when auth by jwt 2020-02-16 18:02:24 +08:00
pycook 9e78955ba1 ACL i18n 2020-02-16 17:36:03 +08:00
pycook 136853d9a4 Merge pull request #33 from shaohaojiecoder/i18n
fix local storage for default lang
2020-02-16 14:51:59 +08:00
pycook 036e3ad00d modeling i18n 2020-02-16 14:50:17 +08:00
shaohaojiecoder 5ce6c93237 fix local storage for default lang 2020-02-16 13:47:30 +08:00
pycook 43dba7f7ed Merge pull request #32 from OhBonsai/develop
fix: recycle import by celery task
2020-02-16 10:38:27 +08:00
penzai f4879d20d6 fix: recycle import by celery task 2020-02-16 09:39:33 +08:00
pycook 740e4c6034 i18n 2020-02-15 20:57:47 +08:00
pycook 0f2baa1d94 Merge pull request #31 from shaohaojiecoder/i18n
I18n
2020-02-11 09:50:50 +08:00
pycook 405b0af72c Merge branch 'develop' into i18n 2020-02-11 09:50:11 +08:00
shaohaojiecoder a4e5178979 fix meta title 2020-02-09 19:50:23 +08:00
shaohaojiecoder c14fe23283 add i18n basic structure 2020-02-09 17:54:57 +08:00
shaohaojiecoder b3a058f908 add something 2020-02-09 17:22:17 +08:00
shaohaojiecoder bd82a0e27c add some 2020-02-08 22:27:56 +08:00
pycook f22a5c3543 Define display fields 2020-02-08 17:39:42 +08:00
pycook ed81c3f091 Define display fields 2020-02-08 17:36:54 +08:00
shaohaojiecoder 07814b85f9 add basic 2020-02-07 22:05:52 +08:00
pycook db52b28d6b fix jwt decode 2020-02-06 09:59:24 +08:00
pycook fc85ba21c8 Merge pull request #29 from OhBonsai/master
fix ci_type_attr_group update bug and add ci_type test case
2020-02-04 12:47:15 +08:00
Bonsai 6c5ee3fcd9 Merge pull request #2 from pycook/master
merge from main repo
2020-02-04 10:56:03 +08:00
penzai 40f1ef88a9 test: add ci_type test cases 2020-02-04 10:54:16 +08:00
penzai bce422ffc8 fix: update attribute group without name params will fail. #tests/test_cmdb_ci_type.py::test_update_attribute_group_ci_type 2020-02-04 10:53:07 +08:00
pycook 7c79066532 Merge branch 'master' of https://github.com/pycook/cmdb 2020-01-19 18:18:43 +08:00
pycook 1129ac93fb Merge pull request #28 from shaohaojiecoder/master
fix drag group and attrs
2020-01-19 18:18:23 +08:00
haojie.shao 5ab0e7e737 fix drag group and attrs 2020-01-19 18:14:53 +08:00
pycook 23319c7417 /ci_types/<int:type_id>/attributes/transfer and /ci_types/<int:type_id>/attribute_groups/transfer 2020-01-19 17:59:32 +08:00
pycook c74f85cabb Merge pull request #26 from OhBonsai/master
test: add basic test code and attribute create api test case
2020-01-17 15:34:02 +08:00
penzai fce2b689fb Merge remote-tracking branch 'origin/master' 2020-01-17 15:10:16 +08:00
penzai 105327bb0c test: add basic test code and attribute create api test case 2020-01-17 15:08:46 +08:00
Bonsai 745c43d0a4 Merge pull request #1 from pycook/master
merge master
2020-01-17 10:05:05 +08:00
pycook 3130d94568 [fix] cycle import 2020-01-15 11:52:33 +08:00
pycook 04a66eb239 flush cache when delete attribute 2020-01-15 09:06:31 +08:00
pycook 68390ec6f1 [fix] delete CIType's attribute 2020-01-14 20:52:36 +08:00
pycook 17392be138 api docs update 2020-01-06 21:58:06 +08:00
pycook f2fdb29221 api docs update 2020-01-06 21:54:33 +08:00
pycook 4a18698423 update README 2019-12-31 22:53:21 +08:00
pycook 95ccee04f9 update README 2019-12-31 22:50:01 +08:00
pycook b60628247b Update README.md
update
2019-12-31 22:37:36 +08:00
pycook a6d7699ab4 Merge pull request #24 from fxiang21/master
add Readme of English
2019-12-31 22:32:48 +08:00
fxiang21 4b21bcc438 add Readme of English 2019-12-31 22:25:42 +08:00
pycook 33dce2f0f3 Update README.md
[fix] flask db-setup
2019-12-31 11:06:59 +08:00
pycook d43b827fe5 [fix] fuzzy search 2019-12-25 13:36:43 +08:00
pycook aec8bade41 [fix] security alerts 2019-12-25 10:19:03 +08:00
pycook 89ae89a449 [fix] validate attribute is required 2019-12-24 20:37:32 +08:00
pycook 945f90e386 disable eslint warning 2019-12-24 15:15:03 +08:00
pycook 2ba6a16613 support JSON type 2019-12-23 18:51:33 +08:00
pycook 6089039366 fix sidebar menu in mobile 2019-12-23 11:58:41 +08:00
pycook e1e5307084 add yarn.lock 2019-12-23 11:27:47 +08:00
pycook 2ff7fce9dd flask init-acl 2019-12-20 12:57:39 +09:00
pycook fc4d3e0c1a update makefile 2019-12-18 23:36:58 +09:00
pycook f66a94712e Modify code organization 2019-12-18 23:33:22 +09:00
pycook 24664c7686 catch abort exception when getting relation views 2019-12-13 09:59:38 +08:00
pycook 1d668bab6e update 2019-12-12 21:45:19 +08:00
pycook 3d4b84909e fix delete relation view 2019-12-12 21:36:33 +08:00
pycook 8341e742eb [fix] update attribute which is list 2019-12-11 18:12:10 +08:00
pycook a71ba83de0 release 2.0 2019-12-11 12:43:55 +08:00
pycook 9668131c18 V2.0 2019-12-11 12:14:23 +08:00
pycook 4a744dcad9 fix relation tree 2019-12-10 15:35:59 +08:00
pycook 2a420225e2 Merge pull request #22 from lovvvve/FixDelCi_type
fix(ci_type api): fix the judgment condition of deleting ci_type
2019-12-10 14:41:27 +08:00
Lovvvve ff67785618 fix(ci_type api): fix the judgment condition of deleting ci_type 2019-12-10 14:31:27 +08:00
pycook dfe1ba55d5 sidebar scroll 2019-12-09 17:16:38 +08:00
pycook 90b1b6b7af fix relation view 2019-12-09 12:03:58 +08:00
pycook d5fbe42ed7 relation view bugfix 2019-12-08 00:20:55 +08:00
pycook f424ad6864 acl done and bugfix 2019-12-06 22:33:31 +08:00
pycook 16b724bd40 ACL: permission management [doing] 2019-12-04 18:14:09 +08:00
pycook f70ed54cad update readme 2019-12-04 09:26:01 +08:00
pycook dd64564160 remove print 2019-12-03 22:13:14 +08:00
pycook cc2cdbcc9f fix delete ci relation 2019-12-03 21:57:44 +08:00
pycook 81fe850627 fix get second cis api 2019-12-03 20:10:27 +08:00
pycook 487d9f76f6 Relation view definition supports two methods 2019-12-03 19:54:01 +08:00
pycook 92dd4c5dfe relation view has been optimised 2019-12-03 19:10:54 +08:00
pycook 8ee7c6daf8 version 1.5: update docker file 2019-11-30 23:07:12 +08:00
pycook 882b158d18 cmdb.sql update 2019-11-29 22:21:41 +08:00
pycook 85222443c0 relation view [done] 2019-11-29 18:11:18 +08:00
pycook 1696ecf49d relation view [doing] 2019-11-28 21:17:06 +08:00
pycook 73b92ff533 relation view define [done] 2019-11-27 18:25:53 +08:00
pycook e977bb15a5 GPLv2 2019-11-25 20:35:05 +08:00
pycook 7c46d6cdbf change to GPLv2 2019-11-25 20:33:56 +08:00
pycook 4d11c1f7db License change to GPLv3 2019-11-25 19:42:37 +08:00
pycook 0a563deb11 UI: relation type define [done] 2019-11-25 19:23:51 +08:00
pycook ba80ec4403 /acl/resources add param resource_type_id 2019-11-24 22:33:57 +08:00
pycook 3b7cc4595b fix grant 2019-11-24 22:29:51 +08:00
pycook 9fe47657a6 Merge pull request #20 from kdyq007/master
[Update] Add role, resource, and permission pages
2019-11-24 17:29:58 +08:00
kdyq007 5a4a6caa07 Merge branch 'master' of https://github.com/kdyq007/cmdb 2019-11-24 17:21:27 +08:00
kdyq007 9dadbe1599 Merge pull request #6 from pycook/master
fix acl api
2019-11-24 16:43:53 +08:00
pycook 40d016f513 fix acl api 2019-11-24 16:35:28 +08:00
kdyq007 655edaa7c8 [Update] Complete permission management 2019-11-24 15:40:38 +08:00
kdyq007 7fa5cff919 [Update] Complete the permission management page 2019-11-24 15:22:18 +08:00
kdyq007 d19834ed5d Merge pull request #5 from pycook/master
Sync
2019-11-23 21:53:46 +08:00
pycook b6be430aa3 fix 2019-11-23 21:50:45 +08:00
kdyq007 63792c242f [Update] Complete the resource type page 2019-11-23 20:16:31 +08:00
kdyq007 10f7029722 [Save] Complete the resource type permission display 2019-11-23 18:08:52 +08:00
pycook ba176542dc fix acl resource 2019-11-23 17:42:33 +08:00
kdyq007 aae3b6e2ff Merge pull request #4 from pycook/master
fix acl resource_type
2019-11-23 17:36:42 +08:00
pycook b370c7d46e fix acl resource_type 2019-11-23 17:24:43 +08:00
kdyq007 efa5a8ea5d Merge pull request #3 from pycook/master
Sync
2019-11-23 14:52:41 +08:00
pycook fd532626ac relative view api [done] 2019-11-22 18:18:22 +08:00
pycook 617337c614 Realize /api/v0.1/ci_relations/s [done] 2019-11-21 18:21:03 +08:00
kdyq007 9a3d24ac81 [Update] Save work in progress 2019-11-20 19:02:36 +08:00
kdyq007 454dd4c56b Merge branch 'master' of https://github.com/kdyq007/cmdb 2019-11-19 21:52:33 +08:00
kdyq007 88ad72d4dc Merge pull request #2 from pycook/master
update
2019-11-19 21:52:02 +08:00
kdyq007 8d1517d550 [Update] Complete basic role and user management 2019-11-19 21:49:51 +08:00
pycook d3a8ef5966 fix get user by uid 2019-11-19 21:46:53 +08:00
pycook e5baa5012d acl: resource type api 2019-11-19 21:41:46 +08:00
pycook a1f63b00dd fix search 2019-11-19 18:32:35 +08:00
pycook 47ded84231 elastic search [done] 2019-11-19 18:16:31 +08:00
kdyq007 224a48a5f3 [Update] Remove app_id 2019-11-18 22:22:38 +08:00
pycook 0e7c52df71 es search update 2019-11-18 22:05:59 +08:00
pycook ff701cc770 search by elasticsearch [doing] 2019-11-18 20:02:25 +08:00
kdyq007 6a7bb725cc Merge pull request #1 from pycook/master
How does this work? A reverse pull request
2019-11-18 18:31:14 +08:00
pycook 0a13186c13 fix acl api 2019-11-18 12:02:02 +08:00
kdyq007 a0ffeb9950 [Update] Complete the role management page 2019-11-17 21:08:04 +08:00
kdyq007 6c70ec6d53 [Update] Complete basic roles APIs 2019-11-17 17:09:24 +08:00
qiqi 4b5f82699a [Update] Complete the user management page 2019-11-17 09:32:39 +08:00
pycook f78c3b928b pep8 2019-11-15 18:03:06 +08:00
pycook 332659c1d5 update acl 2019-11-15 16:54:56 +08:00
pycook 3beb2706dc Merge pull request #18 from kdyq007/master
[Update] Change image paths and compress images
2019-11-14 21:59:38 +08:00
qiqi a14111e1ce [Update] Improve formatting 2019-11-14 21:51:58 +08:00
qiqi c4320c14f9 [Update] Move images and compress them 2019-11-14 21:48:36 +08:00
qiqi 4c5442748f [Update] Improve README formatting 2019-11-14 21:00:24 +08:00
qiqi a81750acba [Update] Add QQ group to README.md 2019-11-14 20:55:48 +08:00
pycook 0439e2462b update acl 2019-11-14 18:35:31 +08:00
pycook 3b62bd7ac9 update readme 2019-11-13 14:02:02 +08:00
pycook f6add52721 python3.7 timezone fix 2019-11-13 13:56:44 +08:00
pycook c85e535288 update acl 2019-11-13 13:25:42 +08:00
pycook c0c6d116b5 docker images use aliyun 2019-11-13 11:56:17 +08:00
pycook 39153e92d1 update Makefile and support for install by make 2019-11-12 11:55:04 +08:00
pycook 42bcc2e510 fix py3 2019-11-12 11:15:25 +08:00
pycook 398fbb25dc merge Dockerfile 2019-11-12 10:40:37 +08:00
pycook 4b312d4f99 delete docs/Dockerfile 2019-11-11 23:12:50 +08:00
pycook 10414155a5 fix timezone 2019-11-11 23:11:12 +08:00
pycook feda0c37e7 update README 2019-11-11 16:10:02 +08:00
pycook 173c120b64 flask init-cache 2019-11-11 15:46:57 +08:00
pycook 5f2a0d1a7b Remove package-lock.json and remove some compile warnings 2019-11-11 13:16:07 +08:00
pycook 50f894a01d add command init-cache 2019-11-11 11:27:43 +08:00
pycook 66e93e73af Merge branch 'master' of https://github.com/pycook/cmdb 2019-11-11 09:20:07 +08:00
pycook 58ad9d3f05 vue lint 2019-11-11 00:25:22 +08:00
pycook 08c96039e9 gunicorn==19.5.0 2019-11-10 19:10:23 +08:00
pycook ca0dd97626 Docker to production 2019-11-10 19:06:38 +08:00
pycook 7810ee3974 Partially completed backend development of permissions management 2019-11-08 17:42:13 +08:00
pycook 2cfea7ef08 Update README.md
Add notes on docker one-click installation
2019-11-08 15:26:22 +08:00
pycook 0cee6cea25 fix py2.7 unicode encoding error 2019-11-08 15:15:31 +08:00
pycook 5d13ba2f26 users drop is_admin 2019-11-08 14:58:21 +08:00
pycook a583433530 fix unicode encode error 2019-11-08 14:37:53 +08:00
fxiang21 733ac3b2b4 Remove the redundant docker-start directory 2019-11-08 09:20:34 +08:00
fxiang21 ef6300255a Fix nginx forwarding issue 2019-11-08 09:20:27 +08:00
fxiang21 aad37dcf0b Add containerized deployment method 2019-11-08 09:20:09 +08:00
pycook cce10d39ea code format 2019-11-07 19:18:31 +08:00
pycook c521dd447e Update README.md
pipenv run flask run -h 0.0.0.0
2019-11-05 17:52:45 +08:00
pycook 4d0cd4ba56 Update README.md
For non-local access, change the IP address in VUE_APP_API_BASE_URL in ui/.env
2019-11-05 17:44:48 +08:00
pycook 7291274cb1 Update README.md
For non-local access, change the IP address in VUE_APP_API_BASE_URL in ui/.env
2019-11-05 17:43:40 +08:00
pycook 44f2e383c3 update overview jpeg url 2019-11-01 11:37:16 +08:00
pycook 1f8219b418 fix add integer list 2019-11-01 11:27:24 +08:00
pycook cb2f170ded mkdir logs, ignore *.log 2019-11-01 10:45:35 +08:00
pycook 1241a23ba8 Update README.md
add cmdb.sql
2019-10-28 21:48:46 +08:00
pycook 7d7744b7dc add docs/cmdb.sql 2019-10-28 21:46:41 +08:00
pycook 9c7d51127a fix date picker 2019-10-24 20:43:59 +08:00
pycook b5a987f6b4 choice value tip fix 2019-10-24 20:43:58 +08:00
pycook 7bbc68bfd5 fix delete ci type 2019-10-24 20:43:58 +08:00
pycook 99d11e11ce fix ci types show 2019-10-24 20:43:58 +08:00
pycook 7b96ac4638 attribute alias must be unique 2019-10-24 20:43:58 +08:00
pycook 0a36330852 fix attributes paginate 2019-10-24 20:43:54 +08:00
pycook 9105f92c82 update README 2019-10-24 20:43:51 +08:00
pycook 57541ab486 Update README.md
create tables fix
2019-10-24 20:43:51 +08:00
pycook a0fcbd220e attributes paginate and fix update value 2019-10-24 20:43:51 +08:00
pycook d54b404eb6 add docs 2019-10-24 20:43:51 +08:00
pycook 620c5bb5eb ci search return unique key 2019-10-24 20:43:51 +08:00
pycook 0fde1d699d invalid username or password -> 403 2019-10-24 20:43:51 +08:00
shaohaojiecoder 61f77cf311 add batch module 2019-10-24 20:43:51 +08:00
lilixiang 13476128d5 add: attribute library and model modules 2019-10-24 20:43:51 +08:00
pycook 5cdb4ecd2a Revert "add: attribute library and model modules" 2019-10-24 20:43:51 +08:00
lilixiang 64c3b9da3b add: attribute library and model modules 2019-10-24 20:43:51 +08:00
pycook 55dad7a58c Force unicode encoding for cache 2019-08-30 09:46:24 +08:00
pycook 38dabc35e5 add .gitattributes 2019-08-28 21:08:28 +08:00
pycook 5b4f95a50e add ui 2019-08-28 20:51:51 +08:00
pycook f3046d3c91 remove ui 2019-08-28 20:48:23 +08:00
pycook 5faae9af67 remove ui 2019-08-28 20:48:04 +08:00
pycook c0b50642e0 update README 2019-08-28 20:45:59 +08:00
pycook 12ca296879 Upgrade the backend and open-source the UI 2019-08-28 20:34:10 +08:00
pycook 420c6cea2b delete... 2016-08-26 13:46:03 +08:00
pycook ccc4bb48fa pep8 2016-06-27 10:50:32 +08:00
651 changed files with 136125 additions and 5417 deletions

.gitattributes (new file, 1 line)

@@ -0,0 +1 @@
*.vue linguist-language=python

.github/workflows/codeql-analysis.yml (new file, 67 lines)

@@ -0,0 +1,67 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"

on:
  push:
    branches: [ master ]
  pull_request:
    # The branches below must be a subset of the branches above
    branches: [ master ]
  schedule:
    - cron: '20 3 * * 2'

jobs:
  analyze:
    name: Analyze
    runs-on: ubuntu-latest

    strategy:
      fail-fast: false
      matrix:
        language: [ 'python' ]
        # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python' ]
        # Learn more:
        # https://docs.github.com/en/free-pro-team@latest/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#changing-the-languages-that-are-analyzed

    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      # Initializes the CodeQL tools for scanning.
      - name: Initialize CodeQL
        uses: github/codeql-action/init@v1
        with:
          languages: ${{ matrix.language }}
          # If you wish to specify custom queries, you can do so here or in a config file.
          # By default, queries listed here will override any specified in a config file.
          # Prefix the list here with "+" to use these queries and those in the config file.
          # queries: ./path/to/local/query, your-org/your-repo/queries@main

      # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
      # If this step fails, then you should remove it and run the build manually (see below)
      - name: Autobuild
        uses: github/codeql-action/autobuild@v1

      # Command-line programs to run using the OS shell.
      # 📚 https://git.io/JvXDl

      # ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
      # and modify them (or add more) to build your code if your project
      # uses a compiled language

      #- run: |
      #   make bootstrap
      #   make release

      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@v1

.gitignore (88 changed lines)

@@ -1,21 +1,77 @@
*~
*.pyc
.idea
data
logs/*
*.sql
test/*
tools/*
cmdb_agent/*
.vscode
migrates
config.cfg
*.log
*_packed.js
*_packed.css
*.orig
*.zip
*.swp
config.cfg
*.tar.gz
core/special.py
lib/special
lib/audit*
templates/*audit*
codeLin*
lib/spec_*
nohup.out
.DS_Store
*.py[cod]
# C extensions
*.so
# Packages
*.egg
*.egg-info
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
#lib
#lib64
Pipfile.lock
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
.tox
nosetests.xml
.pytest_cache
cmdb-api/test-output
cmdb-api/api/uploaded_files
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Complexity
output/*.html
output/*/index.html
# Sphinx
docs/_build
# Virtualenvs
env/
# Configuration
settings.py
# Development database
*.db
# UI
cmdb-ui/node_modules
cmdb-ui/dist
# Log files
cmdb-ui/npm-debug.log*
cmdb-ui/yarn-debug.log*
cmdb-ui/yarn-error.log*
cmdb-ui/package-lock.json

LICENSE (849 changed lines)

@@ -1,281 +1,620 @@
The hunk replaces the GNU GENERAL PUBLIC LICENSE, Version 2, June 1991 with the GNU AFFERO GENERAL PUBLIC LICENSE, Version 3, 19 November 2007 (Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>); the two full license texts are not reproduced here.
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
The Corresponding Source for a work in source code form is that
same work.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
2. Basic Permissions.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to this License.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
13. Remote Network Interaction; Use with the GNU General Public License.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
14. Revised Versions of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
NO WARRANTY
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
15. Disclaimer of Warranty.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
@@ -287,54 +626,36 @@ free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
{description}
Copyright (C) {year} {fullname}
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
GNU Affero General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.
{signature of Ty Coon}, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.
Makefile Normal file
@@ -0,0 +1,52 @@
MYSQL_ROOT_PASSWORD ?= root
MYSQL_PORT ?= 3306
REDIS_PORT ?= 6379
default: help
help: ## display this help
@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m<target>\033[0m\n"} /^[a-zA-Z0-9_-]+:.*?##/ { printf " \033[36m%-15s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)
.PHONY: help
env: ## create a development environment using pipenv
sudo easy_install pip && \
pip install pipenv -i https://pypi.douban.com/simple && \
npm install yarn && \
make deps
.PHONY: env
docker-mysql: ## deploy MySQL using Docker
@docker run --name mysql -p ${MYSQL_PORT}:3306 -e MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} -d mysql:latest
.PHONY: docker-mysql
docker-redis: ## deploy Redis using Docker
@docker run --name redis -p ${REDIS_PORT}:6379 -d redis:latest
.PHONY: docker-redis
deps: ## install API (pipenv) and UI (yarn) dependencies
cd cmdb-api && \
pipenv install --dev && \
pipenv run flask db-setup && \
pipenv run flask cmdb-init-cache && \
cd .. && \
cd cmdb-ui && yarn install && cd ..
.PHONY: deps
api: ## start api server
cd cmdb-api && pipenv run flask run -h 0.0.0.0
.PHONY: api
worker: ## start async tasks worker
cd cmdb-api && pipenv run celery -A celery_worker.celery worker -E -Q one_cmdb_async --concurrency=1 -D && pipenv run celery -A celery_worker.celery worker -E -Q acl_async --concurrency=1 -D
.PHONY: worker
ui: ## start ui server
cd cmdb-ui && yarn run serve
.PHONY: ui
clean: ## remove unwanted files like .pyc's
pipenv run flask clean
.PHONY: clean
lint: ## check style with flake8
flake8 --exclude=env .
.PHONY: lint
@@ -1,4 +1,88 @@
## cmdb
![VeOps open-source CMDB](docs/images/logo.png)
### cmdb: the Configuration Management Database
### This part (API & Portal) will soon be open-sourced separately
[![License](https://img.shields.io/badge/License-AGPLv3-brightgreen)](https://github.com/veops/cmdb/blob/master/LICENSE)
[![UI](https://img.shields.io/badge/UI-Ant%20Design%20Pro%20Vue-brightgreen)](https://github.com/sendya/ant-design-pro-vue)
[![API](https://img.shields.io/badge/API-Flask-brightgreen)](https://github.com/pallets/flask)
[English](docs/README_en.md) / [中文](README.md)
- Product documentation: https://veops.cn/docs/
- Online demo: <a href="https://cmdb.veops.cn" target="_blank">CMDB</a>
- username: demo or admin
- password: 123456
> **Important**: the `master` branch may be in an _unstable state_ during development.
> Please get a stable version via the [releases](https://github.com/veops/cmdb/releases).
## System Introduction
### Overall Architecture
<img src=docs/images/view.jpg />
### Related Documentation
- <a href="https://mp.weixin.qq.com/s/v3eANth64UBW5xdyOkK3tg" target="_blank">High-level design</a>
- <a href="https://github.com/veops/cmdb/tree/master/docs/cmdb_api.md" target="_blank">API documentation</a>
- <a href="https://mp.weixin.qq.com/s/rQaf4AES7YJsyNQG_MKOLg" target="_blank">Auto-discovery</a>
### Features
- Flexibility
  1. Standardize and centrally manage complex data assets
  2. Automatically discover and import IT assets
- Security
  1. Fine-grained access control
  2. Complete operation logs
- Multiple applications
  1. Rich view dimensions
  2. RESTful API
  3. Attribute triggers and computed attributes
### Main Functions
- Model attributes support indexing, multiple values, default sorting, and font color; computed attributes are supported
- Auto-discovery, scheduled inspection, and file import
- Resource, tree, and relationship views
- Configuration and display of relationships between models
- Fine-grained access control and complete operation logs
- Cross-model search
### System Overview
- Service tree
![Service tree](docs/images/0.png "Home page")
[See more screenshots](docs/screenshot.md)
### More Features
> You are also welcome to visit the [VeOps website](https://veops.cn) to discover more free O&M systems.
## Adopting Companies
> Companies using the open-source CMDB are welcome to register in [#112](https://github.com/veops/cmdb/issues/112)
## Installation
### One-click build with Docker
- Go to the project root (install Docker first)
```
docker-compose up -d
```
- Open in a browser: [http://127.0.0.1:8000](http://127.0.0.1:8000)
- username: demo or admin
- password: 123456
### [Local development environment setup](docs/local.md)
### [Installation via Makefile](docs/makefile.md)
---
_**Follow our official WeChat account and click "Contact us" to join our WeChat or QQ group (336164978) for more product and industry news**_
![Official account: VeOps OneOps](docs/images/qrcode_for_gzh.jpg)
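Since the README above advertises a RESTful API, here is a small editorial sketch (not part of the diff) of querying it with `requests`, which is pinned in the Pipfile later in this compare view. The `/api/v0.1/ci` prefix appears in the blueprint registrations below; the `/s` search route, the `q` expression syntax, and the response shape are assumptions taken from the linked API documentation, and authentication (API key or session) is omitted.
```python
# Hypothetical example, not part of the diff: search CIs on a locally running instance.
# The /api/v0.1/ci prefix is visible in this compare view; the "/s" search route and the
# "q" query syntax are assumptions based on the project's API docs, and auth is omitted.
import requests

BASE_URL = "http://127.0.0.1:8000"  # address used in the Docker instructions above

resp = requests.get(f"{BASE_URL}/api/v0.1/ci/s",
                    params={"q": "_type:server", "count": 10})
resp.raise_for_status()
print(resp.json())
```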

@@ -1,120 +0,0 @@
# -*- coding: utf-8 -*-
import os
import logging
from logging.handlers import SMTPHandler
from logging.handlers import TimedRotatingFileHandler
from flask import Flask
from flask import request
from flask import g
from flask.ext.babel import Babel
from flask.ext.principal import identity_loaded
from flask.ext.principal import Principal
import core
from extensions import db
from extensions import mail
from extensions import cache
from extensions import celery
from extensions import rd
from models.account import User
from lib.template import filters
APP_NAME = "CMDB-API"
MODULES = (
(core.attribute, "/api/v0.1/attributes"),
(core.citype, "/api/v0.1/citypes"),
(core.cityperelation, "/api/v0.1/cityperelations"),
(core.cirelation, "/api/v0.1/cirelations"),
(core.ci, "/api/v0.1/ci"),
(core.history, "/api/v0.1/history"),
(core.account, "/api/v0.1/accounts"),
(core.special, ""),
)
def make_app(config=None, modules=None):
if not modules:
modules = MODULES
app = Flask(APP_NAME)
app.config.from_pyfile(config)
configure_extensions(app)
configure_i18n(app)
configure_identity(app)
configure_blueprints(app, modules)
configure_logging(app)
configure_template_filters(app)
return app
def configure_extensions(app):
db.app = app
db.init_app(app)
mail.init_app(app)
cache.init_app(app)
celery.init_app(app)
rd.init_app(app)
def configure_i18n(app):
babel = Babel(app)
@babel.localeselector
def get_locale():
accept_languages = app.config.get('ACCEPT_LANGUAGES', ['en', 'zh'])
return request.accept_languages.best_match(accept_languages)
def configure_modules(app, modules):
for module, url_prefix in modules:
app.register_module(module, url_prefix=url_prefix)
def configure_blueprints(app, modules):
for module, url_prefix in modules:
app.register_blueprint(module, url_prefix=url_prefix)
def configure_identity(app):
principal = Principal(app)
@identity_loaded.connect_via(app)
def on_identity_loaded(sender, identity):
g.user = User.query.from_identity(identity)
def configure_logging(app):
hostname = os.uname()[1]
mail_handler = SMTPHandler(
app.config['MAIL_SERVER'],
app.config['DEFAULT_MAIL_SENDER'],
app.config['ADMINS'],
'[%s] CMDB API error' % hostname,
(
app.config['MAIL_USERNAME'],
app.config['MAIL_PASSWORD'],
)
)
mail_formater = logging.Formatter(
"%(asctime)s %(levelname)s %(pathname)s %(lineno)d\n%(message)s")
mail_handler.setFormatter(mail_formater)
mail_handler.setLevel(logging.ERROR)
if not app.debug:
app.logger.addHandler(mail_handler)
formatter = logging.Formatter(
"%(asctime)s %(levelname)s %(pathname)s %(lineno)d - %(message)s")
log_file = app.config['LOG_PATH']
file_handler = TimedRotatingFileHandler(
log_file, when='d', interval=1, backupCount=7)
file_handler.setLevel(getattr(logging, app.config['LOG_LEVEL']))
file_handler.setFormatter(formatter)
app.logger.addHandler(file_handler)
app.logger.setLevel(getattr(logging, app.config['LOG_LEVEL']))
def configure_template_filters(app):
for name in dir(filters):
if callable(getattr(filters, name)):
app.add_template_filter(getattr(filters, name))

cmdb-api/.env Normal file
@@ -0,0 +1,7 @@
# Environment variable overrides for local development
FLASK_APP=autoapp.py
FLASK_DEBUG=1
FLASK_ENV=development
GUNICORN_WORKERS=2
LOG_LEVEL=debug
SECRET_KEY='<YourSecretKey>'
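The project's settings module is not part of this compare view, so the following is only a sketch, assuming the `.env` values above are read with `environs` (pinned in the Pipfile below). The `FLASK_*` variables are also consumed directly by the Flask CLI; the variable names mirror the `.env` file, everything else is illustrative.
```python
# Sketch only: how the .env values above could be read with environs (pinned below).
# The real settings.py is not shown in this diff; names mirror the .env file.
from environs import Env

env = Env()
env.read_env()  # loads cmdb-api/.env if present

SECRET_KEY = env.str("SECRET_KEY", "non-secret-default")
LOG_LEVEL = env.str("LOG_LEVEL", "debug").upper()   # "debug" -> "DEBUG" for logging
GUNICORN_WORKERS = env.int("GUNICORN_WORKERS", 2)
DEBUG = env.bool("FLASK_DEBUG", False)              # FLASK_* is also read by the Flask CLI
```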

cmdb-api/Pipfile Normal file
@@ -0,0 +1,77 @@
[[source]]
url = "https://mirrors.aliyun.com/pypi/simple"
verify_ssl = true
name = "pypi"
[packages]
# Flask
Flask = "==2.3.2"
Werkzeug = "==2.3.6"
click = ">=5.0"
# Api
Flask-RESTful = "==0.3.10"
# Database
Flask-SQLAlchemy = "==2.5.0"
SQLAlchemy = "==1.4.49"
PyMySQL = "==1.1.0"
redis = "==4.6.0"
# Migrations
Flask-Migrate = "==2.5.2"
# Deployment
gunicorn = "==21.0.1"
supervisor = "==4.0.3"
# Auth
Flask-Login = "==0.6.2"
Flask-Bcrypt = "==1.0.1"
Flask-Cors = ">=3.0.8"
python-ldap = "==3.4.0"
pycryptodome = "==3.12.0"
# Caching
Flask-Caching = ">=1.0.0"
# Environment variable parsing
environs = "==4.2.0"
marshmallow = "==2.20.2"
# async tasks
celery = "==5.3.1"
celery_once = "==3.0.1"
more-itertools = "==5.0.0"
kombu = "==5.3.1"
# common setting
timeout-decorator = "==0.5.0"
WTForms = "==3.0.0"
email-validator = "==1.3.1"
treelib = "==1.6.1"
flasgger = "==0.9.5"
Pillow = "==9.3.0"
# other
six = "==1.16.0"
bs4 = ">=0.0.1"
toposort = ">=1.5"
requests = ">=2.22.0"
requests_oauthlib = "==1.3.1"
markdownify = "==0.11.6"
PyJWT = "==2.4.0"
elasticsearch = "==7.17.9"
future = "==0.18.3"
itsdangerous = "==2.1.2"
Jinja2 = "==3.1.2"
jinja2schema = "==0.1.4"
msgpack-python = "==0.5.6"
alembic = "==1.7.7"
[dev-packages]
# Testing
pytest = "==4.6.5"
WebTest = "==2.0.33"
factory-boy = "==2.12.*"
pdbpp = "==0.10.0"
# Lint and code style
flake8 = "==3.7.7"
flake8-blind-except = "==0.1.1"
flake8-debugger = "==3.1.0"
flake8-docstrings = "==1.3.0"
flake8-isort = "==2.7.0"
isort = "==4.3.21"
pep8-naming = "==0.8.2"
pydocstyle = "==3.0.0"

cmdb-api/api/__init__.py Normal file
@@ -0,0 +1 @@
# -*- coding:utf-8 -*-

cmdb-api/api/app.py Normal file
@@ -0,0 +1,202 @@
# -*- coding: utf-8 -*-
"""The app module, containing the app factory function."""
import datetime
import decimal
import logging
import os
import sys
from inspect import getmembers
from logging.handlers import RotatingFileHandler
from flask import Flask
from flask import jsonify
from flask import make_response
from flask.blueprints import Blueprint
from flask.cli import click
from flask.json.provider import DefaultJSONProvider
import api.views.entry
from api.extensions import (bcrypt, cache, celery, cors, db, es, login_manager, migrate, rd)
from api.flask_cas import CAS
from api.models.acl import User
HERE = os.path.abspath(os.path.dirname(__file__))
PROJECT_ROOT = os.path.join(HERE, os.pardir)
@login_manager.user_loader
def load_user(user_id):
"""Load user by ID."""
return User.get_by_id(int(user_id))
class ReverseProxy(object):
"""Wrap the application in this middleware and configure the
front-end server to add these headers, to let you quietly bind
this to a URL other than / and to an HTTP scheme that is
different than what is used locally.
In nginx:
location /myprefix {
proxy_pass http://192.168.0.1:5001;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Scheme $scheme;
proxy_set_header X-Script-Name /myprefix;
}
:param app: the WSGI application
"""
def __init__(self, app):
self.app = app
def __call__(self, environ, start_response):
script_name = environ.get('HTTP_X_SCRIPT_NAME', '')
if script_name:
environ['SCRIPT_NAME'] = script_name
path_info = environ['PATH_INFO']
if path_info.startswith(script_name):
environ['PATH_INFO'] = path_info[len(script_name):]
scheme = environ.get('HTTP_X_SCHEME', '')
if scheme:
environ['wsgi.url_scheme'] = scheme
return self.app(environ, start_response)
class MyJSONEncoder(DefaultJSONProvider):
def default(self, o):
if isinstance(o, (decimal.Decimal, datetime.date, datetime.time)):
return str(o)
if isinstance(o, datetime.datetime):
return o.strftime('%Y-%m-%d %H:%M:%S')
return o
def create_acl_app(config_object="settings"):
app = Flask(__name__.split(".")[0])
app.config.from_object(config_object)
register_extensions(app)
return app
def create_app(config_object="settings"):
"""Create application factory, as explained here: http://flask.pocoo.org/docs/patterns/appfactories/.
:param config_object: The configuration object to use.
"""
app = Flask(__name__.split(".")[0])
app.config.from_object(config_object)
app.json = MyJSONEncoder(app)
configure_logger(app)
register_extensions(app)
register_blueprints(app)
register_error_handlers(app)
register_shell_context(app)
register_commands(app)
CAS(app)
app.wsgi_app = ReverseProxy(app.wsgi_app)
configure_upload_dir(app)
return app
def configure_upload_dir(app):
upload_dir = app.config.get('UPLOAD_DIRECTORY', 'uploaded_files')
common_setting_path = os.path.join(HERE, upload_dir)
for path in [common_setting_path]:
if not os.path.exists(path):
app.logger.warning(f"{path}, not exist, create...")
os.makedirs(path)
app.config['UPLOAD_DIRECTORY_FULL'] = common_setting_path
def register_extensions(app):
"""Register Flask extensions."""
bcrypt.init_app(app)
cache.init_app(app)
db.init_app(app)
cors.init_app(app)
login_manager.init_app(app)
migrate.init_app(app, db)
rd.init_app(app)
if app.config.get('USE_ES'):
es.init_app(app)
app.config.update(app.config.get("CELERY"))
celery.conf.update(app.config)
def register_blueprints(app):
for item in getmembers(api.views.entry):
if item[0].startswith("blueprint") and isinstance(item[1], Blueprint):
app.register_blueprint(item[1])
def register_error_handlers(app):
"""Register error handlers."""
def render_error(error):
"""Render error template."""
import traceback
app.logger.error(traceback.format_exc())
error_code = getattr(error, "code", 500)
if not str(error_code).isdigit():
error_code = 400
return make_response(jsonify(message=str(error)), error_code)
for errcode in app.config.get("ERROR_CODES") or [400, 401, 403, 404, 405, 500, 502]:
app.errorhandler(errcode)(render_error)
app.handle_exception = render_error
def register_shell_context(app):
"""Register shell context objects."""
def shell_context():
"""Shell context objects."""
return {"db": db, "User": User}
app.shell_context_processor(shell_context)
def register_commands(app):
"""Register Click commands."""
for root, _, files in os.walk(os.path.join(HERE, "commands")):
for filename in files:
if not filename.startswith("_") and filename.endswith("py"):
if root not in sys.path:
sys.path.insert(1, root)
command = __import__(os.path.splitext(filename)[0])
func_list = [o[0] for o in getmembers(command) if isinstance(o[1], click.core.Command)]
for func_name in func_list:
app.cli.add_command(getattr(command, func_name))
def configure_logger(app):
"""Configure loggers."""
handler = logging.StreamHandler(sys.stdout)
formatter = logging.Formatter(
"%(asctime)s %(levelname)s %(pathname)s %(lineno)d - %(message)s")
if app.debug:
handler.setFormatter(formatter)
app.logger.addHandler(handler)
log_file = app.config['LOG_PATH']
file_handler = RotatingFileHandler(log_file,
maxBytes=2 ** 30,
backupCount=7)
file_handler.setLevel(getattr(logging, app.config['LOG_LEVEL']))
file_handler.setFormatter(formatter)
app.logger.addHandler(file_handler)
app.logger.setLevel(getattr(logging, app.config['LOG_LEVEL']))
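The nginx snippet in the `ReverseProxy` docstring explains the intent; the short editorial test below (not part of the diff) shows how the middleware rewrites the WSGI environ when the `X-Script-Name` and `X-Scheme` headers are set. The `api.app` import path is assumed from the `cmdb-api/api/app.py` layout above.
```python
# Editorial example: exercising the ReverseProxy middleware defined above.
from wsgiref.util import setup_testing_defaults

from api.app import ReverseProxy  # import path assumed from cmdb-api/api/app.py in this diff


def fake_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    body = "SCRIPT_NAME=%(SCRIPT_NAME)s PATH_INFO=%(PATH_INFO)s scheme=%(wsgi.url_scheme)s" % environ
    return [body.encode()]


environ = {}
setup_testing_defaults(environ)
environ["PATH_INFO"] = "/myprefix/api/v0.1/ci"
environ["HTTP_X_SCRIPT_NAME"] = "/myprefix"  # nginx: proxy_set_header X-Script-Name /myprefix;
environ["HTTP_X_SCHEME"] = "https"           # nginx: proxy_set_header X-Scheme $scheme;

chunks = ReverseProxy(fake_app)(environ, lambda status, headers: None)
print(b"".join(chunks).decode())
# -> SCRIPT_NAME=/myprefix PATH_INFO=/api/v0.1/ci scheme=https
```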

@@ -1,4 +1 @@
# -*- coding:utf-8 -*-
__all__ = []

@@ -0,0 +1,72 @@
import click
from flask.cli import with_appcontext
@click.command()
@with_appcontext
def init_acl():
"""
acl init
"""
from api.models.acl import Role
from api.models.acl import App
from api.tasks.acl import role_rebuild
from api.lib.perm.acl.const import ACL_QUEUE
roles = Role.get_by(to_dict=False)
apps = App.get_by(to_dict=False)
for role in roles:
if role.app_id:
role_rebuild.apply_async(args=(role.id, role.app_id), queue=ACL_QUEUE)
else:
for app in apps:
role_rebuild.apply_async(args=(role.id, app.id), queue=ACL_QUEUE)
# @click.command()
# @with_appcontext
# def acl_clean():
# from api.models.acl import Resource
# from api.models.acl import Permission
# from api.models.acl import RolePermission
#
# perms = RolePermission.get_by(to_dict=False)
#
# for r in perms:
# perm = Permission.get_by_id(r.perm_id)
# if perm and perm.app_id != r.app_id:
# resource_id = r.resource_id
# resource = Resource.get_by_id(resource_id)
# perm_name = perm.name
# existed = Permission.get_by(resource_type_id=resource.resource_type_id, name=perm_name, first=True,
# to_dict=False)
# if existed is not None:
# other = RolePermission.get_by(rid=r.rid, perm_id=existed.id, resource_id=resource_id)
# if not other:
# r.update(perm_id=existed.id)
# else:
# r.soft_delete()
# else:
# r.soft_delete()
#
#
# @click.command()
# @with_appcontext
# def acl_has_resource_role():
# from api.models.acl import Role
# from api.models.acl import App
# from api.lib.perm.acl.cache import HasResourceRoleCache
# from api.lib.perm.acl.role import RoleCRUD
#
# roles = Role.get_by(to_dict=False)
# apps = App.get_by(to_dict=False)
# for role in roles:
# if role.app_id:
# res = RoleCRUD.recursive_resources(role.id, role.app_id)
# if res.get('resources') or res.get('groups'):
# HasResourceRoleCache.add(role.id, role.app_id)
# else:
# for app in apps:
# res = RoleCRUD.recursive_resources(role.id, app.id)
# if res.get('resources') or res.get('groups'):
# HasResourceRoleCache.add(role.id, app.id)


@@ -0,0 +1,313 @@
# -*- coding:utf-8 -*-
import copy
import datetime
import json
import time
import click
from flask import current_app
from flask.cli import with_appcontext
from flask_login import login_user
import api.lib.cmdb.ci
from api.extensions import db
from api.extensions import rd
from api.lib.cmdb.cache import AttributeCache
from api.lib.cmdb.const import PermEnum
from api.lib.cmdb.const import REDIS_PREFIX_CI
from api.lib.cmdb.const import REDIS_PREFIX_CI_RELATION
from api.lib.cmdb.const import ResourceTypeEnum
from api.lib.cmdb.const import RoleEnum
from api.lib.cmdb.const import ValueTypeEnum
from api.lib.exception import AbortException
from api.lib.perm.acl.acl import ACLManager
from api.lib.perm.acl.acl import UserCache
from api.lib.perm.acl.cache import AppCache
from api.lib.perm.acl.resource import ResourceCRUD
from api.lib.perm.acl.resource import ResourceTypeCRUD
from api.lib.perm.acl.role import RoleCRUD
from api.lib.perm.acl.user import UserCRUD
from api.models.acl import App
from api.models.acl import ResourceType
from api.models.cmdb import Attribute
from api.models.cmdb import CI
from api.models.cmdb import CIRelation
from api.models.cmdb import CIType
from api.models.cmdb import CITypeTrigger
from api.models.cmdb import PreferenceRelationView
@click.command()
@with_appcontext
def cmdb_init_cache():
db.session.remove()
ci_relations = CIRelation.get_by(to_dict=False)
relations = dict()
for cr in ci_relations:
relations.setdefault(cr.first_ci_id, {}).update({cr.second_ci_id: cr.second_ci.type_id})
for i in relations:
relations[i] = json.dumps(relations[i])
if relations:
rd.create_or_update(relations, REDIS_PREFIX_CI_RELATION)
if current_app.config.get("USE_ES"):
from api.extensions import es
from api.models.cmdb import Attribute
from api.lib.cmdb.utils import ValueTypeMap
attributes = Attribute.get_by(to_dict=False)
for attr in attributes:
other = dict()
other['index'] = True if attr.is_index else False
if attr.value_type == ValueTypeEnum.TEXT:
other['analyzer'] = 'ik_max_word'
other['search_analyzer'] = 'ik_smart'
if attr.is_index:
other["fields"] = {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
try:
es.update_mapping(attr.name, ValueTypeMap.es_type[attr.value_type], other)
except Exception as e:
print(e)
cis = CI.get_by(to_dict=False)
for ci in cis:
if current_app.config.get("USE_ES"):
res = es.get_index_id(ci.id)
if res:
continue
else:
res = rd.get([ci.id], REDIS_PREFIX_CI)
if res and list(filter(lambda x: x, res)):
continue
m = api.lib.cmdb.ci.CIManager()
ci_dict = m.get_ci_by_id_from_db(ci.id, need_children=False, use_master=False)
if current_app.config.get("USE_ES"):
es.create(ci_dict)
else:
rd.create_or_update({ci.id: json.dumps(ci_dict)}, REDIS_PREFIX_CI)
db.session.remove()
@click.command()
@with_appcontext
def cmdb_init_acl():
_app = AppCache.get('cmdb') or App.create(name='cmdb')
app_id = _app.id
# 1. add resource type
for resource_type in ResourceTypeEnum.all():
try:
ResourceTypeCRUD.add(app_id, resource_type, '', PermEnum.all())
except AbortException:
pass
# 2. add role
try:
RoleCRUD.add_role(RoleEnum.CONFIG, app_id, True)
except AbortException:
pass
try:
RoleCRUD.add_role(RoleEnum.CMDB_READ_ALL, app_id, False)
except AbortException:
pass
# 3. add resource and grant
ci_types = CIType.get_by(to_dict=False)
type_id = ResourceType.get_by(name=ResourceTypeEnum.CI, first=True, to_dict=False).id
for ci_type in ci_types:
try:
ResourceCRUD.add(ci_type.name, type_id, app_id)
except AbortException:
pass
ACLManager().grant_resource_to_role(ci_type.name,
RoleEnum.CMDB_READ_ALL,
ResourceTypeEnum.CI,
[PermEnum.READ])
relation_views = PreferenceRelationView.get_by(to_dict=False)
type_id = ResourceType.get_by(name=ResourceTypeEnum.RELATION_VIEW, first=True, to_dict=False).id
for view in relation_views:
try:
ResourceCRUD.add(view.name, type_id, app_id)
except AbortException:
pass
ACLManager().grant_resource_to_role(view.name,
RoleEnum.CMDB_READ_ALL,
ResourceTypeEnum.RELATION_VIEW,
[PermEnum.READ])
@click.command()
@click.option(
'-u',
'--user',
help='username'
)
@click.option(
'-p',
'--password',
help='password'
)
@click.option(
'-m',
'--mail',
help='mail'
)
@with_appcontext
def add_user(user, password, mail):
"""
create a user
is_admin: default is False
Example: flask add-user -u <username> -p <password> -m <mail>
"""
assert user is not None
assert password is not None
assert mail is not None
UserCRUD.add(username=user, password=password, email=mail)
@click.command()
@click.option(
'-u',
'--user',
help='username'
)
@with_appcontext
def del_user(user):
"""
delete a user
Example: flask del-user -u <username>
"""
assert user is not None
from api.models.acl import User
u = User.get_by(username=user, first=True, to_dict=False)
u and UserCRUD.delete(u.uid)
@click.command()
@with_appcontext
def cmdb_counter():
"""
Dashboard calculations
"""
from api.lib.cmdb.cache import CMDBCounterCache
current_app.test_request_context().push()
login_user(UserCache.get('worker'))
while True:
try:
db.session.remove()
CMDBCounterCache.reset()
except Exception:
import traceback
print(traceback.format_exc())
time.sleep(60)
@click.command()
@with_appcontext
def cmdb_trigger():
"""
Trigger execution for date attribute
"""
from api.lib.cmdb.ci import CITriggerManager
current_day = datetime.datetime.today().strftime("%Y-%m-%d")
trigger2cis = dict()
trigger2completed = dict()
i = 0
while True:
try:
db.session.remove()
if datetime.datetime.today().strftime("%Y-%m-%d") != current_day:
trigger2cis = dict()
trigger2completed = dict()
current_day = datetime.datetime.today().strftime("%Y-%m-%d")
if i == 3 or i == 0:
i = 0
triggers = CITypeTrigger.get_by(to_dict=False, __func_isnot__key_attr_id=None)
for trigger in triggers:
try:
ready_cis = CITriggerManager.waiting_cis(trigger)
except Exception as e:
print(e)
continue
if trigger.id not in trigger2cis:
trigger2cis[trigger.id] = (trigger, ready_cis)
else:
cur = trigger2cis[trigger.id]
cur_ci_ids = {i.ci_id for i in cur[1]}
trigger2cis[trigger.id] = (
trigger, cur[1] + [i for i in ready_cis if i.ci_id not in cur_ci_ids
and i.ci_id not in trigger2completed.get(trigger.id, {})])
for tid in trigger2cis:
trigger, cis = trigger2cis[tid]
for ci in copy.deepcopy(cis):
if CITriggerManager.trigger_notify(trigger, ci):
trigger2completed.setdefault(trigger.id, set()).add(ci.ci_id)
for _ci in cis:
if _ci.ci_id == ci.ci_id:
cis.remove(_ci)
i += 1
time.sleep(10)
except Exception as e:
import traceback
print(traceback.format_exc())
current_app.logger.error("cmdb trigger exception: {}".format(e))
time.sleep(60)
@click.command()
@with_appcontext
def cmdb_index_table_upgrade():
"""
Migrate data from tables c_value_integers, c_value_floats, and c_value_datetime
"""
for attr in Attribute.get_by(to_dict=False):
if attr.value_type not in {ValueTypeEnum.TEXT, ValueTypeEnum.JSON} and not attr.is_index:
attr.update(is_index=True)
AttributeCache.clean(attr)
from api.models.cmdb import CIValueInteger, CIIndexValueInteger
from api.models.cmdb import CIValueFloat, CIIndexValueFloat
from api.models.cmdb import CIValueDateTime, CIIndexValueDateTime
for i in CIValueInteger.get_by(to_dict=False):
CIIndexValueInteger.create(ci_id=i.ci_id, attr_id=i.attr_id, value=i.value, commit=False)
i.delete(commit=False)
db.session.commit()
for i in CIValueFloat.get_by(to_dict=False):
CIIndexValueFloat.create(ci_id=i.ci_id, attr_id=i.attr_id, value=i.value, commit=False)
i.delete(commit=False)
db.session.commit()
for i in CIValueDateTime.get_by(to_dict=False):
CIIndexValueDateTime.create(ci_id=i.ci_id, attr_id=i.attr_id, value=i.value, commit=False)
i.delete(commit=False)
db.session.commit()
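A hedged usage note on the commands above, assuming the same dash-style naming as the add-user docstring: flask cmdb-init-cache warms the Redis (or Elasticsearch) CI cache, flask cmdb-init-acl seeds the cmdb app, roles and resources in ACL, and flask cmdb-index-table-upgrade is a one-off data migration; flask cmdb-counter and flask cmdb-trigger contain while-True loops and are meant to run as long-lived worker processes (for example under supervisor) rather than as one-shot commands.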

View File

@ -0,0 +1,288 @@
import click
from flask import current_app
from flask.cli import with_appcontext
from werkzeug.datastructures import MultiDict
from api.lib.common_setting.acl import ACLManager
from api.lib.common_setting.employee import EmployeeAddForm
from api.lib.common_setting.resp_format import ErrFormat
from api.models.common_setting import Employee, Department
class InitEmployee(object):
"""
Initialize employees
"""
def __init__(self):
self.log = current_app.logger
def import_user_from_acl(self):
"""
Import users from ACL
"""
InitDepartment().init()
acl = ACLManager('acl')
user_list = acl.get_all_users()
username_list = [e['username'] for e in Employee.get_by()]
for user in user_list:
acl_uid = user['uid']
block = 1 if user['block'] else 0
acl_rid = self.get_rid_by_uid(acl_uid)
if user['username'] in username_list:
existed = Employee.get_by(first=True, username=user['username'], to_dict=False)
if existed:
existed.update(
acl_uid=acl_uid,
acl_rid=acl_rid,
block=block,
)
continue
try:
form = EmployeeAddForm(MultiDict(user))
if not form.validate():
raise Exception(
','.join(['{}: {}'.format(field, ','.join(msg)) for field, msg in form.errors.items()]))
data = form.data
data['acl_uid'] = acl_uid
data['acl_rid'] = acl_rid
data['block'] = block
data.pop('password')
Employee.create(
**data
)
except Exception as e:
self.log.error(ErrFormat.acl_import_user_failed.format(user['username'], str(e)))
self.log.error(e)
def get_rid_by_uid(self, uid):
from api.models.acl import Role
role = Role.get_by(first=True, uid=uid)
return role['id'] if role is not None else 0
class InitDepartment(object):
def __init__(self):
self.log = current_app.logger
def init(self):
self.init_wide_company()
def hard_delete(self, department_id, department_name):
existed_deleted_list = Department.query.filter(
Department.department_name == department_name,
Department.department_id == department_id,
Department.deleted == 1,
).all()
for existed in existed_deleted_list:
existed.delete()
def get_department(self, department_name):
return Department.query.filter(
Department.department_name == department_name,
Department.deleted == 0,
).order_by(Department.created_at.asc()).first()
def run(self, department_id, department_name, department_parent_id):
self.hard_delete(department_id, department_name)
res = self.get_department(department_name)
if res:
if res.department_id == department_id:
return
else:
new_d = res.update(
department_id=department_id,
department_parent_id=department_parent_id,
)
return
Department.create(
department_id=department_id,
department_name=department_name,
department_parent_id=department_parent_id,
)
new_d = self.get_department(department_name)
if new_d.department_id != department_id:
new_d = new_d.update(
department_id=department_id,
department_parent_id=department_parent_id,
)
self.log.info(f"初始化 {department_name} 部门成功.")
def run_common(self, department_id, department_name, department_parent_id):
try:
self.run(department_id, department_name, department_parent_id)
except Exception as e:
current_app.logger.error(f"init {department_name} err:")
current_app.logger.error(e)
raise Exception(e)
def init_wide_company(self):
"""
Create the department with id 0 and name '全公司' (the whole company)
"""
department_id = 0
department_name = '全公司'
department_parent_id = -1
self.run_common(department_id, department_name, department_parent_id)
def create_acl_role_with_department(self):
"""
Create an ACL role for every existing department
"""
acl = ACLManager('acl')
role_name_map = {role['name']: role for role in acl.get_all_roles()}
d_list = Department.query.filter(
Department.deleted == 0, Department.department_parent_id != -1).all()
for department in d_list:
if department.acl_rid > 0:
continue
role = role_name_map.get(department.department_name)
if role is None:
payload = {
'app_id': 'acl',
'name': department.department_name,
}
role = acl.create_role(payload)
acl_rid = role.get('id') if role else 0
department.update(
acl_rid=acl_rid
)
info = f"update department acl_rid: {acl_rid}"
current_app.logger.info(info)
def init_backend_resource(self):
acl = self.check_app('backend')
resources_types = acl.get_all_resources_types()
results = list(filter(lambda t: t['name'] == '操作权限', resources_types['groups']))
if len(results) == 0:
payload = dict(
app_id=acl.app_name,
name='操作权限',
description='',
perms=['read', 'grant', 'delete', 'update']
)
resource_type = acl.create_resources_type(payload)
else:
resource_type = results[0]
for name in ['公司信息', '公司架构', '通知设置']:
payload = dict(
type_id=resource_type['id'],
app_id=acl.app_name,
name=name,
)
try:
acl.create_resource(payload)
except Exception as e:
if '已经存在' in str(e):
pass
else:
raise Exception(e)
def check_app(self, app_name):
acl = ACLManager(app_name)
payload = dict(
name=app_name,
description=app_name
)
try:
app = acl.validate_app()
if app:
return acl
acl.create_app(payload)
except Exception as e:
current_app.logger.error(e)
if '不存在' in str(e):
acl.create_app(payload)
return acl
raise Exception(e)
@click.command()
@with_appcontext
def init_import_user_from_acl():
"""
Import users from ACL
"""
InitEmployee().import_user_from_acl()
@click.command()
@with_appcontext
def init_department():
"""
Department initialization
"""
cli = InitDepartment()
cli.init_wide_company()
cli.create_acl_role_with_department()
cli.init_backend_resource()
@click.command()
@with_appcontext
def common_check_new_columns():
"""
add new columns to tables
"""
from api.extensions import db
from sqlalchemy import inspect, text
def get_model_by_table_name(table_name):
for model in db.Model.registry._class_registry.values():
if hasattr(model, '__tablename__') and model.__tablename__ == table_name:
return model
return None
def add_new_column(table_name, new_column):
column_type = new_column.type.compile(engine.dialect)
default_value = new_column.default.arg if new_column.default else None
sql = f"ALTER TABLE {table_name} ADD COLUMN {new_column.name} {column_type} "
if new_column.comment:
sql += f" comment '{new_column.comment}'"
if column_type == 'JSON':
pass
elif default_value:
if column_type.startswith('VAR') or column_type.startswith('Text'):
if default_value is None or len(default_value) == 0:
pass
else:
sql += f" DEFAULT {default_value}"
sql = text(sql)
db.session.execute(sql)
engine = db.get_engine()
inspector = inspect(engine)
table_names = inspector.get_table_names()
for table_name in table_names:
existed_columns = inspector.get_columns(table_name)
existed_column_name_list = [c['name'] for c in existed_columns]
model = get_model_by_table_name(table_name)
if model is None:
continue
model_columns = model.__table__.columns._all_columns
for column in model_columns:
if column.name not in existed_column_name_list:
try:
add_new_column(table_name, column)
current_app.logger.info(f"add new column [{column.name}] in table [{table_name}] success.")
except Exception as e:
current_app.logger.error(f"add new column [{column.name}] in table [{table_name}] err:")
current_app.logger.error(e)
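An ordering note: init-department (and init-import-user-from-acl, which calls InitDepartment().init() first) talks to the ACL service through ACLManager, and check_app creates the 'backend' application on the fly when ACL reports it does not exist, so the ACL service must be reachable before these commands are run. common-check-new-columns only adds columns that exist on the models but are missing from the live tables; it never drops or alters existing columns.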

View File

@ -0,0 +1,152 @@
# -*- coding: utf-8 -*-
"""Click commands."""
import os
from glob import glob
from subprocess import call
import click
from flask import current_app
from flask.cli import with_appcontext
from werkzeug.exceptions import MethodNotAllowed, NotFound
from api.extensions import db
HERE = os.path.abspath(os.path.dirname(__file__))
PROJECT_ROOT = os.path.join(HERE, os.pardir, os.pardir)
TEST_PATH = os.path.join(PROJECT_ROOT, "tests")
@click.command()
def test():
"""Run the tests."""
import pytest
rv = pytest.main([TEST_PATH, "--verbose"])
exit(rv)
@click.command()
@click.option(
"-f",
"--fix-imports",
default=True,
is_flag=True,
help="Fix imports using isort, before linting",
)
@click.option(
"-c",
"--check",
default=False,
is_flag=True,
help="Don't make any changes to files, just confirm they are formatted correctly",
)
def lint(fix_imports, check):
"""Lint and check code style with black, flake8 and isort."""
skip = ["node_modules", "requirements", "migrations"]
root_files = glob("*.py")
root_directories = [
name for name in next(os.walk("."))[1] if not name.startswith(".")
]
files_and_directories = [
arg for arg in root_files + root_directories if arg not in skip
]
def execute_tool(description, *args):
"""Execute a checking tool with its arguments."""
command_line = list(args) + files_and_directories
click.echo("{}: {}".format(description, " ".join(command_line)))
rv = call(command_line)
if rv != 0:
exit(rv)
isort_args = ["-rc"]
black_args = []
if check:
isort_args.append("-c")
black_args.append("--check")
if fix_imports:
execute_tool("Fixing import order", "isort", *isort_args)
execute_tool("Formatting style", "black", *black_args)
execute_tool("Checking code style", "flake8")
@click.command()
def clean():
"""Remove *.pyc and *.pyo files recursively starting at current directory.
Borrowed from Flask-Script, converted to use Click.
"""
for dirpath, dirnames, filenames in os.walk("."):
for filename in filenames:
if filename.endswith(".pyc") or filename.endswith(".pyo") or filename.endswith(".c"):
full_pathname = os.path.join(dirpath, filename)
click.echo("Removing {}".format(full_pathname))
os.remove(full_pathname)
@click.command()
@click.option("--url", default=None, help="Url to test (ex. /static/image.png)")
@click.option(
"--order", default="rule", help="Property on Rule to order by (default: rule)"
)
@with_appcontext
def urls(url, order):
"""Display all of the url matching routes for the project.
Borrowed from Flask-Script, converted to use Click.
"""
rows = []
column_headers = ("Rule", "Endpoint", "Arguments")
if url:
try:
rule, arguments = current_app.url_map.bind("localhost").match(
url, return_rule=True
)
rows.append((rule.rule, rule.endpoint, arguments))
column_length = 3
except (NotFound, MethodNotAllowed) as e:
rows.append(("<{}>".format(e), None, None))
column_length = 1
else:
rules = sorted(
current_app.url_map.iter_rules(), key=lambda rule: getattr(rule, order)
)
for rule in rules:
rows.append((rule.rule, rule.endpoint, None))
column_length = 2
str_template = ""
table_width = 0
if column_length >= 1:
max_rule_length = max(len(r[0]) for r in rows)
max_rule_length = max_rule_length if max_rule_length > 4 else 4
str_template += "{:" + str(max_rule_length) + "}"
table_width += max_rule_length
if column_length >= 2:
max_endpoint_length = max(len(str(r[1])) for r in rows)
max_endpoint_length = max_endpoint_length if max_endpoint_length > 8 else 8
str_template += " {:" + str(max_endpoint_length) + "}"
table_width += 2 + max_endpoint_length
if column_length >= 3:
max_arguments_length = max(len(str(r[2])) for r in rows)
max_arguments_length = max_arguments_length if max_arguments_length > 9 else 9
str_template += " {:" + str(max_arguments_length) + "}"
table_width += 2 + max_arguments_length
click.echo(str_template.format(*column_headers[:column_length]))
click.echo("-" * table_width)
for row in rows:
click.echo(str_template.format(*row[:column_length]))
@click.command()
@with_appcontext
def db_setup():
"""create tables
"""
db.create_all()
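With the same dash naming, this file contributes flask test, flask lint, flask clean, flask urls and flask db-setup. For example, flask urls --order endpoint prints the route table sorted by endpoint instead of by rule, and flask lint --check runs isort, black and flake8 without modifying any files.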

View File

@ -0,0 +1,23 @@
# -*- coding:utf-8 -*-
from celery import Celery
from flask_bcrypt import Bcrypt
from flask_caching import Cache
from flask_cors import CORS
from flask_login import LoginManager
from flask_migrate import Migrate
from flask_sqlalchemy import SQLAlchemy
from api.lib.utils import ESHandler
from api.lib.utils import RedisHandler
bcrypt = Bcrypt()
login_manager = LoginManager()
db = SQLAlchemy(session_options={"autoflush": False})
migrate = Migrate()
cache = Cache()
celery = Celery()
cors = CORS(supports_credentials=True)
rd = RedisHandler()
es = ESHandler()

View File

@ -0,0 +1,78 @@
# -*- coding:utf-8 -*-
"""
flask_cas.__init__
"""
import flask
from flask import current_app
# Find the stack on which we want to store the database connection.
# Starting with Flask 0.9, the _app_ctx_stack is the correct one,
# before that we need to use the _request_ctx_stack.
try:
from flask import _app_ctx_stack as stack
except ImportError:
from flask import _request_ctx_stack as stack
from api.flask_cas import routing
class CAS(object):
"""
Required Configs:
|Key |
|----------------|
|CAS_SERVER |
|CAS_AFTER_LOGIN |
Optional Configs:
|Key | Default |
|-------------------------|----------------|
|CAS_TOKEN_SESSION_KEY | _CAS_TOKEN |
|CAS_USERNAME_SESSION_KEY | CAS_USERNAME |
|CAS_LOGIN_ROUTE | '/cas' |
|CAS_LOGOUT_ROUTE | '/cas/logout' |
|CAS_VALIDATE_ROUTE | '/cas/validate'|
"""
def __init__(self, app=None, url_prefix=None):
self._app = app
if app is not None:
self.init_app(app, url_prefix)
def init_app(self, app, url_prefix=None):
# Configuration defaults
app.config.setdefault('CAS_TOKEN_SESSION_KEY', '_CAS_TOKEN')
app.config.setdefault('CAS_USERNAME_SESSION_KEY', 'CAS_USERNAME')
app.config.setdefault('CAS_LOGIN_ROUTE', '/login')
app.config.setdefault('CAS_LOGOUT_ROUTE', '/logout')
app.config.setdefault('CAS_VALIDATE_ROUTE', '/serviceValidate')
# Register Blueprint
app.register_blueprint(routing.blueprint, url_prefix=url_prefix)
# Use the newstyle teardown_appcontext if it's available,
# otherwise fall back to the request context
if hasattr(app, 'teardown_appcontext'):
app.teardown_appcontext(self.teardown)
else:
app.teardown_request(self.teardown)
def teardown(self, exception):
ctx = stack.top
@property
def app(self):
return self._app or current_app
@property
def username(self):
return flask.session.get(
self.app.config['CAS_USERNAME_SESSION_KEY'], None)
@property
def token(self):
return flask.session.get(
self.app.config['CAS_TOKEN_SESSION_KEY'], None)
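A minimal wiring sketch for the extension above, with placeholder URLs and an assumed import path; the routing blueprint (next file) already carries the /api/sso prefix, so no url_prefix is passed here:

from flask import Flask

from api.flask_cas import CAS  # assumed import path

app = Flask(__name__)
app.config['CAS_SERVER'] = 'https://sso.example.com'           # placeholder
app.config['CAS_VALIDATE_SERVER'] = 'https://sso.example.com'  # placeholder, read by validate()
app.config['CAS_AFTER_LOGIN'] = '/'
cas = CAS(app)

# After a successful login, cas.username and cas.token read the values the
# routing module stores in the session.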

View File

@ -0,0 +1,122 @@
# -*- coding:utf-8 -*-
"""
flask_cas.cas_urls
Functions for creating urls to access CAS.
"""
from six.moves.urllib.parse import quote
from six.moves.urllib.parse import urlencode
from six.moves.urllib.parse import urljoin
def create_url(base, path=None, *query):
""" Create a url.
Creates a url by combining base, path, and the query's list of
key/value pairs. Escaping is handled automatically. Any
key/value pair with a value that is None is ignored.
Keyword arguments:
base -- The left most part of the url (ex. http://localhost:5000).
path -- The path after the base (ex. /foo/bar).
query -- A list of key value pairs (ex. [('key', 'value')]).
Example usage:
>>> create_url(
... 'http://localhost:5000',
... 'foo/bar',
... ('key1', 'value'),
... ('key2', None), # Will not include None
... ('url', 'http://example.com'),
... )
'http://localhost:5000/foo/bar?key1=value&url=http%3A%2F%2Fexample.com'
"""
url = base
# Add the path to the url if it's not None.
if path is not None:
url = urljoin(url, quote(path))
# Remove key/value pairs with None values.
query = filter(lambda pair: pair[1] is not None, query)
# Add the query string to the url
url = urljoin(url, '?{0}'.format(urlencode(list(query))))
return url
def create_cas_login_url(cas_url, cas_route, service,
renew=None, gateway=None):
""" Create a CAS login URL .
Keyword arguments:
cas_url -- The url to the CAS (ex. http://sso.pdx.edu)
cas_route -- The route where the CAS lives on server (ex. /cas)
service -- (ex. http://localhost:5000/login)
renew -- "true" or "false"
gateway -- "true" or "false"
Example usage:
>>> create_cas_login_url(
... 'http://sso.pdx.edu',
... '/cas',
... 'http://localhost:5000',
... )
'http://sso.pdx.edu/cas?service=http%3A%2F%2Flocalhost%3A5000'
"""
return create_url(
cas_url,
cas_route,
('service', service),
('renew', renew),
('gateway', gateway),
)
def create_cas_logout_url(cas_url, cas_route, url=None):
""" Create a CAS logout URL.
Keyword arguments:
cas_url -- The url to the CAS (ex. http://sso.pdx.edu)
cas_route -- The route where the CAS lives on server (ex. /cas/logout)
url -- (ex. http://localhost:5000/login)
Example usage:
>>> create_cas_logout_url(
... 'http://sso.pdx.edu',
... '/cas/logout',
... 'http://localhost:5000',
... )
'http://sso.pdx.edu/cas/logout?url=http%3A%2F%2Flocalhost%3A5000'
"""
return create_url(
cas_url,
cas_route,
('service', url),
)
def create_cas_validate_url(cas_url, cas_route, service, ticket,
renew=None):
""" Create a CAS validate URL.
Keyword arguments:
cas_url -- The url to the CAS (ex. http://sso.pdx.edu)
cas_route -- The route where the CAS lives on server (ex. /cas/validate)
service -- (ex. http://localhost:5000/login)
ticket -- (ex. 'ST-58274-x839euFek492ou832Eena7ee-cas')
renew -- "true" or "false"
Example usage:
>>> create_cas_validate_url(
... 'http://sso.pdx.edu',
... '/cas/validate',
... 'http://localhost:5000/login',
... 'ST-58274-x839euFek492ou832Eena7ee-cas'
... )
"""
return create_url(
cas_url,
cas_route,
('service', service),
('ticket', ticket),
('renew', renew),
)

View File

@ -0,0 +1,167 @@
# -*- coding:utf-8 -*-
import json
import bs4
from flask import Blueprint
from flask import current_app, session, request, url_for, redirect
from flask_login import login_user, logout_user
from six.moves.urllib_request import urlopen
from api.lib.perm.acl.cache import UserCache
from .cas_urls import create_cas_login_url
from .cas_urls import create_cas_logout_url
from .cas_urls import create_cas_validate_url
blueprint = Blueprint('cas', __name__)
@blueprint.route('/api/sso/login')
def login():
"""
This route has two purposes. First, it is used by the user
to login. Second, it is used by the CAS to respond with the
`ticket` after the user logs in successfully.
When the user accesses this url, they are redirected to the CAS
to login. If the login was successful, the CAS will respond to this
route with the ticket in the url. The ticket is then validated.
If validation was successful the logged in username is saved in
the user's session under the key `CAS_USERNAME_SESSION_KEY`.
"""
cas_token_session_key = current_app.config['CAS_TOKEN_SESSION_KEY']
if request.values.get("next"):
session["next"] = request.values.get("next")
_service = url_for('cas.login', _external=True, next=session["next"]) \
if session.get("next") else url_for('cas.login', _external=True)
redirect_url = create_cas_login_url(
current_app.config['CAS_SERVER'],
current_app.config['CAS_LOGIN_ROUTE'],
_service)
if 'ticket' in request.args:
session[cas_token_session_key] = request.args.get('ticket')
if request.args.get('ticket'):
if validate(request.args['ticket']):
redirect_url = session.get("next") or \
current_app.config.get("CAS_AFTER_LOGIN")
username = session.get("CAS_USERNAME")
user = UserCache.get(username)
login_user(user)
session.permanent = True
else:
del session[cas_token_session_key]
redirect_url = create_cas_login_url(
current_app.config['CAS_SERVER'],
current_app.config['CAS_LOGIN_ROUTE'],
url_for('cas.login', _external=True),
renew=True)
current_app.logger.info("redirect to: {0}".format(redirect_url))
return redirect(redirect_url)
@blueprint.route('/api/sso/logout')
def logout():
"""
When the user accesses this route they are logged out.
"""
cas_username_session_key = current_app.config['CAS_USERNAME_SESSION_KEY']
cas_token_session_key = current_app.config['CAS_TOKEN_SESSION_KEY']
cas_username_session_key in session and session.pop(cas_username_session_key)
"acl" in session and session.pop("acl")
"uid" in session and session.pop("uid")
cas_token_session_key in session and session.pop(cas_token_session_key)
"next" in session and session.pop("next")
redirect_url = create_cas_logout_url(
current_app.config['CAS_SERVER'],
current_app.config['CAS_LOGOUT_ROUTE'],
url_for('cas.login', _external=True, next=request.referrer))
logout_user()
current_app.logger.debug('Redirecting to: {0}'.format(redirect_url))
return redirect(redirect_url)
def validate(ticket):
"""
Will attempt to validate the ticket. If validation fails, then False
is returned. If validation is successful, then True is returned
and the validated username is saved in the session under the
key `CAS_USERNAME_SESSION_KEY`.
"""
cas_username_session_key = current_app.config['CAS_USERNAME_SESSION_KEY']
current_app.logger.debug("validating token {0}".format(ticket))
cas_validate_url = create_cas_validate_url(
current_app.config['CAS_VALIDATE_SERVER'],
current_app.config['CAS_VALIDATE_ROUTE'],
url_for('cas.login', _external=True),
ticket)
current_app.logger.debug("Making GET request to {0}".format(cas_validate_url))
try:
response = urlopen(cas_validate_url).read()
ticketid = _parse_tag(response, "cas:user")
strs = [s.strip() for s in ticketid.split('|') if s.strip()]
username, is_valid = None, False
if len(strs) == 1:
username = strs[0]
is_valid = True
user_info = json.loads(_parse_tag(response, "cas:other"))
current_app.logger.info(user_info)
except ValueError:
current_app.logger.error("CAS returned unexpected result")
is_valid = False
return is_valid
if is_valid:
current_app.logger.debug("valid")
session[cas_username_session_key] = username
user = UserCache.get(username)
from api.lib.perm.acl.acl import ACLManager
user_info = ACLManager.get_user_info(username)
session["acl"] = dict(uid=user_info.get("uid"),
avatar=user.avatar if user else user_info.get("avatar"),
userId=user_info.get("uid"),
rid=user_info.get("rid"),
userName=user_info.get("username"),
nickName=user_info.get("nickname"),
parentRoles=user_info.get("parents"),
childRoles=user_info.get("children"),
roleName=user_info.get("role"))
session["uid"] = user_info.get("uid")
current_app.logger.debug(session)
current_app.logger.debug(request.url)
else:
current_app.logger.debug("invalid")
return is_valid
def _parse_tag(string, tag):
"""
Used for parsing xml. Search string for the first occurrence of
<tag>.....</tag> and return text (stripped of leading and trailing
whitespace) between tags. Return "" if tag not found.
"""
soup = bs4.BeautifulSoup(string)
if soup.find(tag) is None:
return ''
return soup.find(tag).string.strip()

View File

@ -0,0 +1 @@
# -*- coding:utf-8 -*-

View File

@ -0,0 +1 @@
# -*- coding:utf-8 -*-

View File

@ -0,0 +1,417 @@
# -*- coding:utf-8 -*-
from flask import abort
from flask import current_app
from flask import session
from flask_login import current_user
from api.extensions import db
from api.lib.cmdb.cache import AttributeCache
from api.lib.cmdb.cache import CITypeAttributesCache
from api.lib.cmdb.cache import CITypeCache
from api.lib.cmdb.const import BUILTIN_KEYWORDS
from api.lib.cmdb.const import CITypeOperateType
from api.lib.cmdb.const import CMDB_QUEUE
from api.lib.cmdb.const import PermEnum
from api.lib.cmdb.const import ResourceTypeEnum
from api.lib.cmdb.const import RoleEnum
from api.lib.cmdb.const import ValueTypeEnum
from api.lib.cmdb.history import CITypeHistoryManager
from api.lib.cmdb.resp_format import ErrFormat
from api.lib.cmdb.utils import ValueTypeMap
from api.lib.decorator import kwargs_required
from api.lib.perm.acl.acl import is_app_admin
from api.lib.perm.acl.acl import validate_permission
from api.lib.webhook import webhook_request
from api.models.cmdb import Attribute
from api.models.cmdb import CIType
from api.models.cmdb import CITypeAttribute
from api.models.cmdb import CITypeAttributeGroupItem
from api.models.cmdb import PreferenceShowAttributes
class AttributeManager(object):
"""
CI attributes manager
"""
cls = Attribute
def __init__(self):
pass
@staticmethod
def _get_choice_values_from_webhook(choice_webhook, payload=None):
ret_key = choice_webhook.get('ret_key')
try:
res = webhook_request(choice_webhook, payload or {}).json()
if ret_key:
ret_key_list = ret_key.strip().split("##")
for key in ret_key_list[:-1]:
if key in res:
res = res[key]
if isinstance(res, list):
return [[i[ret_key_list[-1]], {}] for i in res if i.get(ret_key_list[-1])]
return [[i, {}] for i in (res.get(ret_key_list[-1]) or [])]
except Exception as e:
current_app.logger.error("get choice values failed: {}".format(e))
return []
@staticmethod
def _get_choice_values_from_other_ci(choice_other):
from api.lib.cmdb.search import SearchError
from api.lib.cmdb.search.ci import search
type_ids = choice_other.get('type_ids')
attr_id = choice_other.get('attr_id')
other_filter = choice_other.get('filter') or ''
query = "_type:({}),{}".format(";".join(map(str, type_ids)), other_filter)
s = search(query, fl=[str(attr_id)], facet=[str(attr_id)], count=1)
try:
_, _, _, _, _, facet = s.search()
return [[i[0], {}] for i in (list(facet.values()) or [[]])[0]]
except SearchError as e:
current_app.logger.error("get choice values from other ci failed: {}".format(e))
return []
@classmethod
def get_choice_values(cls, attr_id, value_type, choice_web_hook, choice_other,
choice_web_hook_parse=True, choice_other_parse=True):
if choice_web_hook:
if choice_web_hook_parse and isinstance(choice_web_hook, dict):
return cls._get_choice_values_from_webhook(choice_web_hook)
else:
return []
elif choice_other:
if choice_other_parse and isinstance(choice_other, dict):
return cls._get_choice_values_from_other_ci(choice_other)
else:
return []
choice_table = ValueTypeMap.choice.get(value_type)
if not choice_table:
return []
choice_values = choice_table.get_by(fl=["value", "option"], attr_id=attr_id)
return [[choice_value['value'], choice_value['option']] for choice_value in choice_values]
@staticmethod
def add_choice_values(_id, value_type, choice_values):
choice_table = ValueTypeMap.choice.get(value_type)
if choice_table is None:
return
choice_table.get_by(attr_id=_id, only_query=True).delete()
for v, option in choice_values:
choice_table.create(attr_id=_id, value=v, option=option, commit=False)
try:
db.session.flush()
except Exception as e:
current_app.logger.warning("add choice values failed: {}".format(e))
return abort(400, ErrFormat.invalid_choice_values)
@staticmethod
def _del_choice_values(_id, value_type):
choice_table = ValueTypeMap.choice.get(value_type)
choice_table and choice_table.get_by(attr_id=_id, only_query=True).delete()
db.session.flush()
@classmethod
def search_attributes(cls, name=None, alias=None, page=1, page_size=None):
"""
:param name:
:param alias:
:param page:
:param page_size:
:return: attributes; if name is None, all attributes are returned
"""
if name is not None:
attrs = Attribute.get_by_like(name=name)
elif alias is not None:
attrs = Attribute.get_by_like(alias=alias)
else:
attrs = Attribute.get_by()
numfound = len(attrs)
attrs = attrs[(page - 1) * page_size:][:page_size]
res = list()
for attr in attrs:
attr["is_choice"] and attr.update(
dict(choice_value=cls.get_choice_values(attr["id"], attr["value_type"],
attr["choice_web_hook"], attr.get("choice_other"))))
attr['is_choice'] and attr.pop('choice_web_hook', None)
res.append(attr)
return numfound, res
def get_attribute_by_name(self, name):
attr = Attribute.get_by(name=name, first=True)
if attr.get("is_choice"):
attr["choice_value"] = self.get_choice_values(attr["id"], attr["value_type"],
attr["choice_web_hook"], attr.get("choice_other"))
return attr
def get_attribute_by_alias(self, alias):
attr = Attribute.get_by(alias=alias, first=True)
if attr.get("is_choice"):
attr["choice_value"] = self.get_choice_values(attr["id"], attr["value_type"],
attr["choice_web_hook"], attr.get("choice_other"))
return attr
def get_attribute_by_id(self, _id):
attr = Attribute.get_by_id(_id).to_dict()
if attr.get("is_choice"):
attr["choice_value"] = self.get_choice_values(attr["id"], attr["value_type"],
attr["choice_web_hook"], attr.get("choice_other"))
return attr
def get_attribute(self, key, choice_web_hook_parse=True, choice_other_parse=True):
attr = AttributeCache.get(key).to_dict()
if attr.get("is_choice"):
attr["choice_value"] = self.get_choice_values(
attr["id"],
attr["value_type"],
attr["choice_web_hook"],
attr.get("choice_other"),
choice_web_hook_parse=choice_web_hook_parse,
choice_other_parse=choice_other_parse,
)
return attr
@staticmethod
def can_create_computed_attribute():
if RoleEnum.CONFIG not in session.get("acl", {}).get("parentRoles", []) and not is_app_admin('cmdb'):
return abort(403, ErrFormat.role_required.format(RoleEnum.CONFIG))
@classmethod
def calc_computed_attribute(cls, attr_id):
"""
calculate computed attribute for all ci
:param attr_id:
:return:
"""
cls.can_create_computed_attribute()
from api.tasks.cmdb import calc_computed_attribute
calc_computed_attribute.apply_async(args=(attr_id, current_user.uid), queue=CMDB_QUEUE)
@classmethod
@kwargs_required("name")
def add(cls, **kwargs):
choice_value = kwargs.pop("choice_value", [])
kwargs.pop("is_choice", None)
is_choice = True if choice_value or kwargs.get('choice_web_hook') or kwargs.get('choice_other') else False
name = kwargs.pop("name")
if name in BUILTIN_KEYWORDS:
return abort(400, ErrFormat.attribute_name_cannot_be_builtin)
if kwargs.get('choice_other'):
if (not isinstance(kwargs['choice_other'], dict) or not kwargs['choice_other'].get('type_ids') or
not kwargs['choice_other'].get('attr_id')):
return abort(400, ErrFormat.attribute_choice_other_invalid)
alias = kwargs.pop("alias", "")
alias = name if not alias else alias
Attribute.get_by(name=name, first=True) and abort(400, ErrFormat.attribute_name_duplicate.format(name))
if kwargs.get('default') and not (isinstance(kwargs['default'], dict) and 'default' in kwargs['default']):
kwargs['default'] = dict(default=kwargs['default'])
kwargs.get('is_computed') and cls.can_create_computed_attribute()
attr = Attribute.create(flush=True,
name=name,
alias=alias,
is_choice=is_choice,
uid=current_user.uid,
**kwargs)
if choice_value:
cls.add_choice_values(attr.id, attr.value_type, choice_value)
try:
db.session.commit()
except Exception as e:
db.session.rollback()
current_app.logger.error("add attribute error, {0}".format(str(e)))
return abort(400, ErrFormat.add_attribute_failed.format(name))
AttributeCache.clean(attr)
if current_app.config.get("USE_ES"):
from api.extensions import es
other = dict()
other['index'] = True if attr.is_index else False
if attr.value_type == ValueTypeEnum.TEXT:
other['analyzer'] = 'ik_max_word'
other['search_analyzer'] = 'ik_smart'
if attr.is_index:
other["fields"] = {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
es.update_mapping(name, ValueTypeMap.es_type[attr.value_type], other)
return attr.id
@staticmethod
def _clean_ci_type_attributes_cache(attr_id):
for i in CITypeAttribute.get_by(attr_id=attr_id, to_dict=False):
CITypeAttributesCache.clean(i.type_id)
@staticmethod
def _change_index(attr, old, new):
from api.lib.cmdb.utils import TableMap
from api.tasks.cmdb import batch_ci_cache
from api.lib.cmdb.const import CMDB_QUEUE
old_table = TableMap(attr=attr, is_index=old).table
new_table = TableMap(attr=attr, is_index=new).table
ci_ids = []
for i in old_table.get_by(attr_id=attr.id, to_dict=False):
new_table.create(ci_id=i.ci_id, attr_id=attr.id, value=i.value, flush=True)
ci_ids.append(i.ci_id)
old_table.get_by(attr_id=attr.id, only_query=True).delete()
try:
db.session.commit()
except Exception as e:
db.session.rollback()
current_app.logger.error(str(e))
return abort(400, ErrFormat.attribute_index_change_failed)
batch_ci_cache.apply_async(args=(ci_ids,), queue=CMDB_QUEUE)
@staticmethod
def _can_edit_attribute(attr):
from api.lib.cmdb.ci_type import CITypeManager
if attr.uid == current_user.uid:
return True
for i in CITypeAttribute.get_by(attr_id=attr.id, to_dict=False):
resource = CITypeManager.get_name_by_id(i.type_id)
if resource:
validate_permission(resource, ResourceTypeEnum.CI, PermEnum.CONFIG, "cmdb")
return True
def update(self, _id, **kwargs):
attr = Attribute.get_by_id(_id) or abort(404, ErrFormat.attribute_not_found.format("id={}".format(_id)))
if not self._can_edit_attribute(attr):
return abort(403, ErrFormat.cannot_edit_attribute)
if kwargs.get("name"):
other = Attribute.get_by(name=kwargs['name'], first=True, to_dict=False)
if other and other.id != attr.id:
return abort(400, ErrFormat.attribute_name_duplicate.format(kwargs['name']))
if attr.value_type != kwargs.get('value_type'):
return abort(400, ErrFormat.attribute_value_type_cannot_change)
if "is_list" in kwargs and kwargs['is_list'] != attr.is_list:
return abort(400, ErrFormat.attribute_list_value_cannot_change)
if "is_index" in kwargs and kwargs['is_index'] != attr.is_index:
if not is_app_admin("cmdb"):
return abort(400, ErrFormat.attribute_index_cannot_change)
self._change_index(attr, attr.is_index, kwargs['is_index'])
if kwargs.get('choice_other'):
if (not isinstance(kwargs['choice_other'], dict) or not kwargs['choice_other'].get('type_ids') or
not kwargs['choice_other'].get('attr_id')):
return abort(400, ErrFormat.attribute_choice_other_invalid)
existed2 = attr.to_dict()
if not existed2['choice_web_hook'] and not existed2.get('choice_other') and existed2['is_choice']:
existed2['choice_value'] = self.get_choice_values(attr.id, attr.value_type, None, None)
choice_value = kwargs.pop("choice_value", False)
is_choice = True if choice_value or kwargs.get('choice_web_hook') or kwargs.get('choice_other') else False
kwargs['is_choice'] = is_choice
if kwargs.get('default') and not (isinstance(kwargs['default'], dict) and 'default' in kwargs['default']):
kwargs['default'] = dict(default=kwargs['default'])
kwargs.get('is_computed') and self.can_create_computed_attribute()
attr.update(flush=True, filter_none=False, **kwargs)
if is_choice and choice_value:
self.add_choice_values(attr.id, attr.value_type, choice_value)
elif existed2['is_choice']:
self._del_choice_values(attr.id, attr.value_type)
try:
db.session.commit()
except Exception as e:
db.session.rollback()
current_app.logger.error("update attribute error, {0}".format(str(e)))
return abort(400, ErrFormat.update_attribute_failed.format("id={}".format(_id)))
new = attr.to_dict()
if not new['choice_web_hook'] and new['is_choice']:
new['choice_value'] = choice_value
CITypeHistoryManager.add(CITypeOperateType.UPDATE_ATTRIBUTE, None, attr_id=attr.id,
change=dict(old=existed2, new=new))
AttributeCache.clean(attr)
self._clean_ci_type_attributes_cache(_id)
return attr.id
@staticmethod
def delete(_id):
attr = Attribute.get_by_id(_id) or abort(404, ErrFormat.attribute_not_found.format("id={}".format(_id)))
name = attr.name
if CIType.get_by(unique_id=attr.id, first=True, to_dict=False) is not None:
return abort(400, ErrFormat.attribute_is_unique_id)
ref = CITypeAttribute.get_by(attr_id=_id, to_dict=False, first=True)
if ref is not None:
ci_type = CITypeCache.get(ref.type_id)
return abort(400, ErrFormat.attribute_is_ref_by_type.format(ci_type and ci_type.alias or ref.type_id))
if attr.uid != current_user.uid and not is_app_admin('cmdb'):
return abort(403, ErrFormat.cannot_delete_attribute)
if attr.is_choice:
choice_table = ValueTypeMap.choice.get(attr.value_type)
choice_table.get_by(attr_id=_id, only_query=True).delete()
attr.soft_delete()
AttributeCache.clean(attr)
for i in PreferenceShowAttributes.get_by(attr_id=_id, to_dict=False):
i.soft_delete(commit=False)
for i in CITypeAttributeGroupItem.get_by(attr_id=_id, to_dict=False):
i.soft_delete(commit=False)
db.session.commit()
return name
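To make the ret_key convention in _get_choice_values_from_webhook concrete: the key path is split on '##', every key except the last walks into the nested response, and the last key is pulled out of each list item. A hedged illustration with a made-up endpoint and payload shape (the url/method fields are assumptions about what webhook_request consumes):

# Hypothetical webhook response: {"result": {"hosts": [{"name": "web-1"}, {"name": "web-2"}]}}
choice_web_hook = {
    "url": "https://cmdb.example.com/api/hosts",  # assumed field
    "method": "GET",                              # assumed field
    "ret_key": "result##hosts##name",
}
# AttributeManager.get_choice_values(attr_id, value_type, choice_web_hook, None)
# would then return [["web-1", {}], ["web-2", {}]].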

View File

@ -0,0 +1 @@
# -*- coding:utf-8 -*-

View File

@ -0,0 +1,511 @@
# -*- coding:utf-8 -*-
import datetime
import json
import os
from flask import abort
from flask import current_app
from flask_login import current_user
from sqlalchemy import func
from api.extensions import db
from api.lib.cmdb.auto_discovery.const import ClOUD_MAP
from api.lib.cmdb.cache import CITypeAttributeCache
from api.lib.cmdb.cache import CITypeCache
from api.lib.cmdb.ci import CIManager
from api.lib.cmdb.ci import CIRelationManager
from api.lib.cmdb.ci_type import CITypeGroupManager
from api.lib.cmdb.const import AutoDiscoveryType
from api.lib.cmdb.const import PermEnum
from api.lib.cmdb.const import ResourceTypeEnum
from api.lib.cmdb.resp_format import ErrFormat
from api.lib.cmdb.search import SearchError
from api.lib.cmdb.search.ci import search
from api.lib.mixin import DBMixin
from api.lib.perm.acl.acl import is_app_admin
from api.lib.perm.acl.acl import validate_permission
from api.lib.utils import AESCrypto
from api.models.cmdb import AutoDiscoveryCI
from api.models.cmdb import AutoDiscoveryCIType
from api.models.cmdb import AutoDiscoveryRule
PWD = os.path.abspath(os.path.dirname(__file__))
def parse_plugin_script(script):
attributes = []
try:
x = compile(script, '', "exec")
exec(x)
unique_key = locals()['AutoDiscovery']().unique_key
attrs = locals()['AutoDiscovery']().attributes() or []
except Exception as e:
return abort(400, str(e))
if not isinstance(attrs, list):
return abort(400, ErrFormat.adr_plugin_attributes_list_required)
for i in attrs:
if len(i) == 3:
name, _type, desc = i
elif len(i) == 2:
name, _type = i
desc = ""
else:
continue
attributes.append(dict(name=name, type=_type, desc=desc))
return unique_key, attributes
def check_plugin_script(**kwargs):
kwargs['unique_key'], kwargs['attributes'] = parse_plugin_script(kwargs['plugin_script'])
if not kwargs.get('unique_key'):
return abort(400, ErrFormat.adr_unique_key_required)
if not kwargs.get('attributes'):
return abort(400, ErrFormat.adr_plugin_attributes_list_no_empty)
return kwargs
class AutoDiscoveryRuleCRUD(DBMixin):
cls = AutoDiscoveryRule
@classmethod
def get_by_name(cls, name):
return cls.cls.get_by(name=name, first=True, to_dict=False)
@classmethod
def get_by_id(cls, _id):
return cls.cls.get_by_id(_id)
def get_by_inner(self):
return self.cls.get_by(is_inner=True, to_dict=True)
def import_template(self, rules):
for rule in rules:
rule.pop("id", None)
rule.pop("created_at", None)
rule.pop("updated_at", None)
existed = self.cls.get_by(name=rule['name'], first=True, to_dict=False)
if existed is not None:
existed.update(**rule)
else:
self.cls.create(**rule)
def _can_add(self, **kwargs):
self.cls.get_by(name=kwargs['name']) and abort(400, ErrFormat.adr_duplicate.format(kwargs['name']))
if kwargs.get('is_plugin') and kwargs.get('plugin_script'):
kwargs = check_plugin_script(**kwargs)
return kwargs
def _can_update(self, **kwargs):
existed = self.cls.get_by_id(kwargs['_id']) or abort(
404, ErrFormat.adr_not_found.format("id={}".format(kwargs['_id'])))
if 'name' in kwargs and not kwargs['name']:
return abort(400, ErrFormat.argument_value_required.format('name'))
if kwargs.get('name'):
other = self.cls.get_by(name=kwargs['name'], first=True, to_dict=False)
if other and other.id != existed.id:
return abort(400, ErrFormat.adr_duplicate.format(kwargs['name']))
return existed
def update(self, _id, **kwargs):
if kwargs.get('is_plugin') and kwargs.get('plugin_script'):
kwargs = check_plugin_script(**kwargs)
return super(AutoDiscoveryRuleCRUD, self).update(_id, filter_none=False, **kwargs)
def _can_delete(self, **kwargs):
if AutoDiscoveryCIType.get_by(adr_id=kwargs['_id'], first=True):
return abort(400, ErrFormat.adr_referenced)
return self._can_update(**kwargs)
class AutoDiscoveryCITypeCRUD(DBMixin):
cls = AutoDiscoveryCIType
@classmethod
def get_all(cls):
return cls.cls.get_by(to_dict=False)
@classmethod
def get_by_id(cls, _id):
return cls.cls.get_by_id(_id)
@classmethod
def get_by_type_id(cls, type_id):
return cls.cls.get_by(type_id=type_id, to_dict=False)
@classmethod
def get(cls, ci_id, oneagent_id, last_update_at=None):
result = []
rules = cls.cls.get_by(to_dict=True)
for rule in rules:
if rule.get('relation'):
continue
if isinstance(rule.get("extra_option"), dict) and rule['extra_option'].get('secret'):
if not (current_user.username == "cmdb_agent" or current_user.uid == rule['uid']):
rule['extra_option'].pop('secret', None)
else:
rule['extra_option']['secret'] = AESCrypto.decrypt(rule['extra_option']['secret'])
if oneagent_id and rule['agent_id'] == oneagent_id:
result.append(rule)
elif rule['query_expr']:
query = rule['query_expr'].lstrip('q').lstrip('=')
s = search(query, fl=['_id'], count=1000000)
try:
response, _, _, _, _, _ = s.search()
except SearchError as e:
return abort(400, str(e))
for i in (response or []):
if i.get('_id') == ci_id:
result.append(rule)
break
elif not rule['agent_id'] and not rule['query_expr'] and rule['adr_id']:
adr = AutoDiscoveryRuleCRUD.get_by_id(rule['adr_id'])
if not adr:
continue
if adr.type in (AutoDiscoveryType.SNMP, AutoDiscoveryType.HTTP):
continue
if not rule['updated_at']:
continue
result.append(rule)
new_last_update_at = ""
for i in result:
i['adr'] = AutoDiscoveryRule.get_by_id(i['adr_id']).to_dict()
__last_update_at = max([i['updated_at'] or "", i['created_at'] or "",
i['adr']['created_at'] or "", i['adr']['updated_at'] or ""])
if new_last_update_at < __last_update_at:
new_last_update_at = __last_update_at
if not last_update_at or new_last_update_at > last_update_at:
return result, new_last_update_at
else:
return [], new_last_update_at
@staticmethod
def __valid_exec_target(agent_id, query_expr):
_is_app_admin = is_app_admin("cmdb")
if not agent_id and not query_expr and not _is_app_admin:
return abort(403, ErrFormat.adt_target_all_no_permission)
if _is_app_admin:
return
if agent_id and isinstance(agent_id, str) and agent_id.startswith("0x"):
agent_id = agent_id.strip()
q = "op_duty:{0},-rd_duty:{0},oneagent_id:{1}"
s = search(q.format(current_user.username, agent_id.strip()))
try:
response, _, _, _, _, _ = s.search()
if response:
return
except SearchError as e:
current_app.logger.warning(e)
return abort(400, str(e))
s = search(q.format(current_user.nickname, agent_id.strip()))
try:
response, _, _, _, _, _ = s.search()
if response:
return
except SearchError as e:
current_app.logger.warning(e)
return abort(400, str(e))
if query_expr.strip():
query_expr = query_expr.strip()
if query_expr.startswith('q='):
query_expr = query_expr[2:]
s = search(query_expr, count=1000000)
try:
response, _, _, _, _, _ = s.search()
for i in response:
if (current_user.username not in (i.get('rd_duty') or []) and
current_user.username not in (i.get('op_duty') or []) and
current_user.nickname not in (i.get('rd_duty') or []) and
current_user.nickname not in (i.get('op_duty') or [])):
return abort(403, ErrFormat.adt_target_expr_no_permission.format(
i.get("{}_name".format(i.get('ci_type')))))
except SearchError as e:
current_app.logger.warning(e)
return abort(400, str(e))
def _can_add(self, **kwargs):
self.cls.get_by(type_id=kwargs['type_id'], adr_id=kwargs.get('adr_id') or None) and abort(
400, ErrFormat.ad_duplicate)
# self.__valid_exec_target(kwargs.get('agent_id'), kwargs.get('query_expr'))
if kwargs.get('adr_id'):
adr = AutoDiscoveryRule.get_by_id(kwargs['adr_id']) or abort(
404, ErrFormat.adr_not_found.format("id={}".format(kwargs['adr_id'])))
if not adr.is_plugin:
other = self.cls.get_by(adr_id=adr.id, first=True, to_dict=False)
if other:
ci_type = CITypeCache.get(other.type_id)
return abort(400, ErrFormat.adr_default_ref_once.format(ci_type.alias))
if kwargs.get('is_plugin') and kwargs.get('plugin_script'):
kwargs = check_plugin_script(**kwargs)
if isinstance(kwargs.get('extra_option'), dict) and kwargs['extra_option'].get('secret'):
kwargs['extra_option']['secret'] = AESCrypto.encrypt(kwargs['extra_option']['secret'])
kwargs['uid'] = current_user.uid
return kwargs
def _can_update(self, **kwargs):
existed = self.cls.get_by_id(kwargs['_id']) or abort(
404, ErrFormat.ad_not_found.format("id={}".format(kwargs['_id'])))
self.__valid_exec_target(kwargs.get('agent_id'), kwargs.get('query_expr'))
if isinstance(kwargs.get('extra_option'), dict) and kwargs['extra_option'].get('secret'):
if current_user.uid != existed.uid:
return abort(403, ErrFormat.adt_secret_no_permission)
return existed
def update(self, _id, **kwargs):
if kwargs.get('is_plugin') and kwargs.get('plugin_script'):
kwargs = check_plugin_script(**kwargs)
if isinstance(kwargs.get('extra_option'), dict) and kwargs['extra_option'].get('secret'):
kwargs['extra_option']['secret'] = AESCrypto.encrypt(kwargs['extra_option']['secret'])
return super(AutoDiscoveryCITypeCRUD, self).update(_id, filter_none=False, **kwargs)
def _can_delete(self, **kwargs):
if AutoDiscoveryCICRUD.get_by_adt_id(kwargs['_id']):
return abort(400, ErrFormat.cannot_delete_adt)
existed = self.cls.get_by_id(kwargs['_id']) or abort(
404, ErrFormat.ad_not_found.format("id={}".format(kwargs['_id'])))
return existed
class AutoDiscoveryCICRUD(DBMixin):
cls = AutoDiscoveryCI
@classmethod
def get_by_adt_id(cls, adt_id):
return cls.cls.get_by(adt_id=adt_id, to_dict=False)
@classmethod
def get_type_name(cls, adc_id):
adc = cls.cls.get_by_id(adc_id) or abort(404, ErrFormat.adc_not_found)
ci_type = CITypeCache.get(adc.type_id)
return ci_type and ci_type.name
@staticmethod
def get_ci_types(need_other):
result = CITypeGroupManager.get(need_other, False)
adt = {i.type_id for i in AutoDiscoveryCITypeCRUD.get_all()}
for item in result:
item['ci_types'] = [i for i in (item.get('ci_types') or []) if i['id'] in adt]
return result
@staticmethod
def get_attributes_by_type_id(type_id):
from api.lib.cmdb.cache import CITypeAttributesCache
attributes = [i[1] for i in CITypeAttributesCache.get2(type_id) or []]
attr_names = set()
adts = AutoDiscoveryCITypeCRUD.get_by_type_id(type_id)
for adt in adts:
attr_names |= set((adt.attributes or {}).values())
return [attr.to_dict() for attr in attributes if attr.name in attr_names]
@classmethod
def search(cls, page, page_size, fl=None, **kwargs):
type_id = kwargs['type_id']
adts = AutoDiscoveryCITypeCRUD.get_by_type_id(type_id)
if not adts:
return 0, []
adt2attr_map = {i.id: i.attributes or {} for i in adts}
query = db.session.query(cls.cls).filter(cls.cls.deleted.is_(False))
count_query = db.session.query(func.count(cls.cls.id)).filter(cls.cls.deleted.is_(False))
for k in kwargs:
if hasattr(cls.cls, k):
query = query.filter(getattr(cls.cls, k) == kwargs[k])
count_query = count_query.filter(getattr(cls.cls, k) == kwargs[k])
query = query.order_by(cls.cls.is_accept.desc()).order_by(cls.cls.id.desc())
result = []
for i in query.offset((page - 1) * page_size).limit(page_size):
item = i.to_dict()
adt_id = item['adt_id']
item['instance'] = {adt2attr_map[adt_id][k]: v for k, v in item.get('instance').items()
if (not fl or k in fl) and adt2attr_map.get(adt_id, {}).get(k)}
result.append(item)
numfound = query.count()
return numfound, result
@staticmethod
def _get_unique_key(type_id):
ci_type = CITypeCache.get(type_id)
if ci_type:
attr = CITypeAttributeCache.get(type_id, ci_type.unique_id)
return attr and attr.name
def _can_add(self, **kwargs):
pass
def upsert(self, **kwargs):
adt = AutoDiscoveryCIType.get_by_id(kwargs['adt_id']) or abort(404, ErrFormat.adt_not_found)
existed = self.cls.get_by(type_id=kwargs['type_id'],
unique_value=kwargs.get("unique_value"),
first=True, to_dict=False)
changed = False
if existed is not None:
if existed.instance != kwargs['instance']:
existed.update(filter_none=False, **kwargs)
changed = True
else:
existed = self.cls.create(**kwargs)
changed = True
if adt.auto_accept and changed:
try:
self.accept(existed)
except Exception as e:
return abort(400, str(e))
elif changed:
existed.update(is_accept=False, accept_time=None, accept_by=None, filter_none=False)
return existed
def _can_update(self, **kwargs):
existed = self.cls.get_by_id(kwargs['_id']) or abort(404, ErrFormat.adc_not_found)
return existed
def _can_delete(self, **kwargs):
return self._can_update(**kwargs)
def delete(self, _id):
inst = self._can_delete(_id=_id)
inst.delete()
self._after_delete(inst)
return inst
@classmethod
def delete2(cls, type_id, unique_value):
existed = cls.cls.get_by(type_id=type_id, unique_value=unique_value, first=True, to_dict=False) or abort(
404, ErrFormat.adc_not_found)
if current_app.config.get("USE_ACL"):
ci_type = CITypeCache.get(type_id) or abort(404, ErrFormat.ci_type_not_found)
not is_app_admin("cmdb") and validate_permission(ci_type.name, ResourceTypeEnum.CI, PermEnum.DELETE, "cmdb")
existed.delete()
# TODO: delete ci
@classmethod
def accept(cls, adc, adc_id=None, nickname=None):
if adc_id is not None:
adc = cls.cls.get_by_id(adc_id) or abort(404, ErrFormat.adc_not_found)
adt = AutoDiscoveryCITypeCRUD.get_by_id(adc.adt_id) or abort(404, ErrFormat.adt_not_found)
ci_id = None
if adt.attributes:
ci_dict = {adt.attributes[k]: v for k, v in adc.instance.items() if k in adt.attributes}
ci_id = CIManager.add(adc.type_id, is_auto_discovery=True, **ci_dict)
relation_adts = AutoDiscoveryCIType.get_by(type_id=adt.type_id, adr_id=None, to_dict=False)
for r_adt in relation_adts:
if not r_adt.relation or ci_id is None:
continue
for ad_key in r_adt.relation:
if not adc.instance.get(ad_key):
continue
cmdb_key = r_adt.relation[ad_key]
query = "_type:{},{}:{}".format(cmdb_key.get('type_name'), cmdb_key.get('attr_name'),
adc.instance.get(ad_key))
s = search(query)
try:
response, _, _, _, _, _ = s.search()
except SearchError as e:
current_app.logger.warning(e)
return abort(400, str(e))
relation_ci_id = response and response[0]['_id']
if relation_ci_id:
try:
CIRelationManager.add(ci_id, relation_ci_id)
except:
try:
CIRelationManager.add(relation_ci_id, ci_id)
except:
pass
adc.update(is_accept=True,
accept_by=nickname or current_user.nickname,
accept_time=datetime.datetime.now(),
ci_id=ci_id)
class AutoDiscoveryHTTPManager(object):
@staticmethod
def get_categories(name):
return (ClOUD_MAP.get(name) or {}).get('categories') or []
@staticmethod
def get_attributes(name, category):
tpt = ((ClOUD_MAP.get(name) or {}).get('map') or {}).get(category)
if tpt and os.path.exists(os.path.join(PWD, tpt)):
with open(os.path.join(PWD, tpt)) as f:
return json.loads(f.read())
return []
class AutoDiscoverySNMPManager(object):
@staticmethod
def get_attributes():
if os.path.exists(os.path.join(PWD, "templates/net_device.json")):
with open(os.path.join(PWD, "templates/net_device.json")) as f:
return json.loads(f.read())
return []
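A small usage sketch of the two managers above (the import path is assumed); the ClOUD_MAP table in the next file supplies the category-to-template mapping:

from api.lib.cmdb.auto_discovery.auto_discovery import AutoDiscoveryHTTPManager  # assumed path

categories = AutoDiscoveryHTTPManager.get_categories("aliyun")      # -> ["云服务器 ECS"]
attrs = AutoDiscoveryHTTPManager.get_attributes("aliyun", categories[0])
# attrs is the parsed templates/aliyun_ecs.json attribute list shown further below.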

View File

@ -0,0 +1,53 @@
# -*- coding:utf-8 -*-
from api.lib.cmdb.const import AutoDiscoveryType
DEFAULT_HTTP = [
dict(name="阿里云", type=AutoDiscoveryType.HTTP, is_inner=True, is_plugin=False,
option={'icon': {'name': 'caise-aliyun'}}),
dict(name="腾讯云", type=AutoDiscoveryType.HTTP, is_inner=True, is_plugin=False,
option={'icon': {'name': 'caise-tengxunyun'}}),
dict(name="华为云", type=AutoDiscoveryType.HTTP, is_inner=True, is_plugin=False,
option={'icon': {'name': 'caise-huaweiyun'}}),
dict(name="AWS", type=AutoDiscoveryType.HTTP, is_inner=True, is_plugin=False,
option={'icon': {'name': 'caise-aws'}}),
dict(name="交换机", type=AutoDiscoveryType.SNMP, is_inner=True, is_plugin=False,
option={'icon': {'name': 'caise-jiaohuanji'}}),
dict(name="路由器", type=AutoDiscoveryType.SNMP, is_inner=True, is_plugin=False,
option={'icon': {'name': 'caise-luyouqi'}}),
dict(name="防火墙", type=AutoDiscoveryType.SNMP, is_inner=True, is_plugin=False,
option={'icon': {'name': 'caise-fanghuoqiang'}}),
dict(name="打印机", type=AutoDiscoveryType.SNMP, is_inner=True, is_plugin=False,
option={'icon': {'name': 'caise-dayinji'}}),
]
ClOUD_MAP = {
"aliyun": {
"categories": ["云服务器 ECS"],
"map": {
"云服务器 ECS": "templates/aliyun_ecs.json",
}
},
"tencentcloud": {
"categories": ["云服务器 CVM"],
"map": {
"云服务器 CVM": "templates/tencent_cvm.json",
}
},
"huaweicloud": {
"categories": ["云服务器 ECS"],
"map": {
"云服务器 ECS": "templates/huaweicloud_ecs.json",
}
},
"aws": {
"categories": ["云服务器 EC2"],
"map": {
"云服务器 EC2": "templates/aws_ec2.json",
}
},
}
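
Each ClOUD_MAP entry points a vendor/category pair at one of the JSON templates added below. A small sanity-check sketch (illustrative only; PWD is assumed to be this module's directory, matching the convention in the manager code above):

import json
import os

PWD = os.path.abspath(os.path.dirname(__file__))

for vendor, conf in ClOUD_MAP.items():
    for category, rel_path in (conf.get('map') or {}).items():
        tpt = os.path.join(PWD, rel_path)
        assert os.path.exists(tpt), "missing template: {}".format(tpt)
        with open(tpt) as f:
            assert isinstance(json.load(f), list)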


@@ -0,0 +1,647 @@
[
{
"name": "CreationTime",
"type": "文本",
"example": "2017-12-10T04:04Z",
"desc": "\u5b9e\u4f8b\u521b\u5efa\u65f6\u95f4\u3002\u4ee5ISO 8601\u4e3a\u6807\u51c6\uff0c\u5e76\u4f7f\u7528UTC+0\u65f6\u95f4\uff0c\u683c\u5f0f\u4e3ayyyy-MM-ddTHH:mmZ\u3002\u66f4\u591a\u4fe1\u606f\uff0c\u8bf7\u53c2\u89c1[ISO 8601](~~25696~~)\u3002"
},
{
"name": "SerialNumber",
"type": "文本",
"example": "51d1353b-22bf-4567-a176-8b3e12e4****",
"desc": "\u5b9e\u4f8b\u5e8f\u5217\u53f7\u3002"
},
{
"name": "Status",
"type": "文本",
"example": "Running",
"desc": "\u5b9e\u4f8b\u72b6\u6001\u3002"
},
{
"name": "DeploymentSetId",
"type": "文本",
"example": "ds-bp67acfmxazb4p****",
"desc": "\u90e8\u7f72\u96c6ID\u3002"
},
{
"name": "KeyPairName",
"type": "文本",
"example": "testKeyPairName",
"desc": "\u5bc6\u94a5\u5bf9\u540d\u79f0\u3002"
},
{
"name": "SaleCycle",
"type": "文本",
"example": "month",
"desc": "> \u8be5\u53c2\u6570\u5df2\u5f03\u7528\uff0c\u4e0d\u518d\u8fd4\u56de\u6709\u610f\u4e49\u7684\u6570\u636e\u3002"
},
{
"name": "SpotStrategy",
"type": "文本",
"example": "NoSpot",
"desc": "\u6309\u91cf\u5b9e\u4f8b\u7684\u7ade\u4ef7\u7b56\u7565\u3002\u53ef\u80fd\u503c\uff1a\n\n- NoSpot\uff1a\u6b63\u5e38\u6309\u91cf\u4ed8\u8d39\u5b9e\u4f8b\u3002\n- SpotWithPriceLimit\uff1a\u8bbe\u7f6e\u4e0a\u9650\u4ef7\u683c\u7684\u62a2\u5360\u5f0f\u5b9e\u4f8b\u3002\n- SpotAsPriceGo\uff1a\u7cfb\u7edf\u81ea\u52a8\u51fa\u4ef7\uff0c\u6700\u9ad8\u6309\u91cf\u4ed8\u8d39\u4ef7\u683c\u7684\u62a2\u5360\u5f0f\u5b9e\u4f8b\u3002"
},
{
"name": "DeviceAvailable",
"type": "boolean",
"example": "true",
"desc": "\u5b9e\u4f8b\u662f\u5426\u53ef\u4ee5\u6302\u8f7d\u6570\u636e\u76d8\u3002"
},
{
"name": "LocalStorageCapacity",
"type": "整数",
"example": "1000",
"desc": "\u5b9e\u4f8b\u6302\u8f7d\u7684\u672c\u5730\u5b58\u50a8\u5bb9\u91cf\u3002"
},
{
"name": "Description",
"type": "文本",
"example": "testDescription",
"desc": "\u5b9e\u4f8b\u63cf\u8ff0\u3002"
},
{
"name": "SpotDuration",
"type": "整数",
"example": "1",
"desc": "\u62a2\u5360\u5f0f\u5b9e\u4f8b\u7684\u4fdd\u7559\u65f6\u957f\uff0c\u5355\u4f4d\u4e3a\u5c0f\u65f6\u3002\u53ef\u80fd\u503c\u4e3a0~6\u3002\n\n- \u4fdd\u7559\u65f6\u957f2~6\u6b63\u5728\u9080\u6d4b\u4e2d\uff0c\u5982\u9700\u5f00\u901a\u8bf7\u63d0\u4ea4\u5de5\u5355\u3002\n- \u503c\u4e3a0\uff0c\u5219\u4e3a\u65e0\u4fdd\u62a4\u671f\u6a21\u5f0f\u3002\n\n>\u5f53SpotStrategy\u503c\u4e3aSpotWithPriceLimit\u6216SpotAsPriceGo\u65f6\u8fd4\u56de\u8be5\u53c2\u6570\u3002"
},
{
"name": "InstanceNetworkType",
"type": "文本",
"example": "vpc",
"desc": "\u5b9e\u4f8b\u7f51\u7edc\u7c7b\u578b\u3002\u53ef\u80fd\u503c\uff1a\n\n- classic\uff1a\u7ecf\u5178\u7f51\u7edc\u3002\n- vpc\uff1a\u4e13\u6709\u7f51\u7edcVPC\u3002"
},
{
"name": "InstanceName",
"type": "文本",
"example": "InstanceNameTest",
"desc": "\u5b9e\u4f8b\u540d\u79f0\u3002"
},
{
"name": "OSNameEn",
"type": "文本",
"example": "CentOS 7.4 64 bit",
"desc": "\u5b9e\u4f8b\u64cd\u4f5c\u7cfb\u7edf\u7684\u82f1\u6587\u540d\u79f0\u3002"
},
{
"name": "HpcClusterId",
"type": "文本",
"example": "hpc-bp67acfmxazb4p****",
"desc": "\u5b9e\u4f8b\u6240\u5c5e\u7684HPC\u96c6\u7fa4ID\u3002"
},
{
"name": "SpotPriceLimit",
"type": "float",
"example": "0.98",
"desc": "\u5b9e\u4f8b\u7684\u6bcf\u5c0f\u65f6\u6700\u9ad8\u4ef7\u683c\u3002\u652f\u6301\u6700\u59273\u4f4d\u5c0f\u6570\uff0c\u53c2\u6570SpotStrategy=SpotWithPriceLimit\u65f6\uff0c\u8be5\u53c2\u6570\u751f\u6548\u3002"
},
{
"name": "Memory",
"type": "整数",
"example": "16384",
"desc": "\u5185\u5b58\u5927\u5c0f\uff0c\u5355\u4f4d\u4e3aMiB\u3002"
},
{
"name": "OSName",
"type": "文本",
"example": "CentOS 7.4 64 \u4f4d",
"desc": "\u5b9e\u4f8b\u7684\u64cd\u4f5c\u7cfb\u7edf\u540d\u79f0\u3002"
},
{
"name": "DeploymentSetGroupNo",
"type": "整数",
"example": "1",
"desc": "ECS\u5b9e\u4f8b\u7ed1\u5b9a\u90e8\u7f72\u96c6\u5206\u6563\u90e8\u7f72\u65f6\uff0c\u5b9e\u4f8b\u5728\u90e8\u7f72\u96c6\u4e2d\u7684\u5206\u7ec4\u4f4d\u7f6e\u3002"
},
{
"name": "ImageId",
"type": "文本",
"example": "m-bp67acfmxazb4p****",
"desc": "\u5b9e\u4f8b\u8fd0\u884c\u7684\u955c\u50cfID\u3002"
},
{
"name": "VlanId",
"type": "文本",
"example": "10",
"desc": "\u5b9e\u4f8b\u7684VLAN ID\u3002\n\n>\u8be5\u53c2\u6570\u5373\u5c06\u88ab\u5f03\u7528\uff0c\u4e3a\u63d0\u9ad8\u517c\u5bb9\u6027\uff0c\u8bf7\u5c3d\u91cf\u4f7f\u7528\u5176\u4ed6\u53c2\u6570\u3002"
},
{
"name": "ClusterId",
"type": "文本",
"example": "c-bp67acfmxazb4p****",
"desc": "\u5b9e\u4f8b\u6240\u5728\u7684\u96c6\u7fa4ID\u3002\n\n>\u8be5\u53c2\u6570\u5373\u5c06\u88ab\u5f03\u7528\uff0c\u4e3a\u63d0\u9ad8\u517c\u5bb9\u6027\uff0c\u8bf7\u5c3d\u91cf\u4f7f\u7528\u5176\u4ed6\u53c2\u6570\u3002"
},
{
"name": "GPUSpec",
"type": "文本",
"example": "NVIDIA V100",
"desc": "\u5b9e\u4f8b\u89c4\u683c\u9644\u5e26\u7684GPU\u7c7b\u578b\u3002"
},
{
"name": "AutoReleaseTime",
"type": "文本",
"example": "2017-12-10T04:04Z",
"desc": "\u6309\u91cf\u4ed8\u8d39\u5b9e\u4f8b\u7684\u81ea\u52a8\u91ca\u653e\u65f6\u95f4\u3002"
},
{
"name": "DeletionProtection",
"type": "boolean",
"example": "false",
"desc": "\u5b9e\u4f8b\u91ca\u653e\u4fdd\u62a4\u5c5e\u6027\uff0c\u6307\u5b9a\u662f\u5426\u652f\u6301\u901a\u8fc7\u63a7\u5236\u53f0\u6216API\uff08DeleteInstance\uff09\u91ca\u653e\u5b9e\u4f8b\u3002\n\n- true\uff1a\u5df2\u5f00\u542f\u5b9e\u4f8b\u91ca\u653e\u4fdd\u62a4\u3002\n- false\uff1a\u672a\u5f00\u542f\u5b9e\u4f8b\u91ca\u653e\u4fdd\u62a4\u3002\n\n> \u8be5\u5c5e\u6027\u4ec5\u9002\u7528\u4e8e\u6309\u91cf\u4ed8\u8d39\u5b9e\u4f8b\uff0c\u4e14\u53ea\u80fd\u9650\u5236\u624b\u52a8\u91ca\u653e\u64cd\u4f5c\uff0c\u5bf9\u7cfb\u7edf\u91ca\u653e\u64cd\u4f5c\u4e0d\u751f\u6548\u3002"
},
{
"name": "StoppedMode",
"type": "文本",
"example": "KeepCharging",
"desc": "\u5b9e\u4f8b\u505c\u673a\u540e\u662f\u5426\u7ee7\u7eed\u6536\u8d39\u3002\u53ef\u80fd\u503c\uff1a\n\n- KeepCharging\uff1a\u505c\u673a\u540e\u7ee7\u7eed\u6536\u8d39\uff0c\u4e3a\u60a8\u7ee7\u7eed\u4fdd\u7559\u5e93\u5b58\u8d44\u6e90\u3002\n- StopCharging\uff1a\u505c\u673a\u540e\u4e0d\u6536\u8d39\u3002\u505c\u673a\u540e\uff0c\u6211\u4eec\u91ca\u653e\u5b9e\u4f8b\u5bf9\u5e94\u7684\u8d44\u6e90\uff0c\u4f8b\u5982vCPU\u3001\u5185\u5b58\u548c\u516c\u7f51IP\u7b49\u8d44\u6e90\u3002\u91cd\u542f\u662f\u5426\u6210\u529f\u4f9d\u8d56\u4e8e\u5f53\u524d\u5730\u57df\u4e2d\u662f\u5426\u4ecd\u6709\u8d44\u6e90\u5e93\u5b58\u3002\n- Not-applicable\uff1a\u672c\u5b9e\u4f8b\u4e0d\u652f\u6301\u505c\u673a\u4e0d\u6536\u8d39\u529f\u80fd\u3002"
},
{
"name": "GPUAmount",
"type": "整数",
"example": "4",
"desc": "\u5b9e\u4f8b\u89c4\u683c\u9644\u5e26\u7684GPU\u6570\u91cf\u3002"
},
{
"name": "HostName",
"type": "文本",
"example": "testHostName",
"desc": "\u5b9e\u4f8b\u4e3b\u673a\u540d\u3002"
},
{
"name": "InstanceId",
"type": "文本",
"example": "i-bp67acfmxazb4p****",
"desc": "\u5b9e\u4f8bID\u3002"
},
{
"name": "InternetMaxBandwidthOut",
"type": "整数",
"example": "5",
"desc": "\u516c\u7f51\u51fa\u5e26\u5bbd\u6700\u5927\u503c\uff0c\u5355\u4f4d\u4e3aMbit/s\u3002"
},
{
"name": "InternetMaxBandwidthIn",
"type": "整数",
"example": "50",
"desc": "\u516c\u7f51\u5165\u5e26\u5bbd\u6700\u5927\u503c\uff0c\u5355\u4f4d\u4e3aMbit/s\u3002"
},
{
"name": "InstanceType",
"type": "文本",
"example": "ecs.g5.large",
"desc": "\u5b9e\u4f8b\u89c4\u683c\u3002"
},
{
"name": "InstanceChargeType",
"type": "文本",
"example": "PostPaid",
"desc": "\u5b9e\u4f8b\u7684\u8ba1\u8d39\u65b9\u5f0f\u3002\u53ef\u80fd\u503c\uff1a\n\n- PrePaid\uff1a\u5305\u5e74\u5305\u6708\u3002\n- PostPaid\uff1a\u6309\u91cf\u4ed8\u8d39\u3002"
},
{
"name": "RegionId",
"type": "文本",
"example": "cn-hangzhou",
"desc": "\u5b9e\u4f8b\u6240\u5c5e\u5730\u57dfID\u3002"
},
{
"name": "IoOptimized",
"type": "boolean",
"example": "true",
"desc": "\u662f\u5426\u4e3aI/O\u4f18\u5316\u578b\u5b9e\u4f8b\u3002"
},
{
"name": "StartTime",
"type": "文本",
"example": "2017-12-10T04:04Z",
"desc": "\u5b9e\u4f8b\u6700\u8fd1\u4e00\u6b21\u7684\u542f\u52a8\u65f6\u95f4\u3002\u4ee5ISO8601\u4e3a\u6807\u51c6\uff0c\u5e76\u4f7f\u7528UTC+0\u65f6\u95f4\uff0c\u683c\u5f0f\u4e3ayyyy-MM-ddTHH:mmZ\u3002\u66f4\u591a\u4fe1\u606f\uff0c\u8bf7\u53c2\u89c1[ISO8601](~~25696~~)\u3002"
},
{
"name": "Cpu",
"type": "整数",
"example": "8",
"desc": "vCPU\u6570\u3002"
},
{
"name": "LocalStorageAmount",
"type": "整数",
"example": "2",
"desc": "\u5b9e\u4f8b\u6302\u8f7d\u7684\u672c\u5730\u5b58\u50a8\u6570\u91cf\u3002"
},
{
"name": "ExpiredTime",
"type": "文本",
"example": "2017-12-10T04:04Z",
"desc": "\u8fc7\u671f\u65f6\u95f4\u3002\u4ee5ISO8601\u4e3a\u6807\u51c6\uff0c\u5e76\u4f7f\u7528UTC+0\u65f6\u95f4\uff0c\u683c\u5f0f\u4e3ayyyy-MM-ddTHH:mmZ\u3002\u66f4\u591a\u4fe1\u606f\uff0c\u8bf7\u53c2\u89c1[ISO8601](~~25696~~)\u3002"
},
{
"name": "ResourceGroupId",
"type": "文本",
"example": "rg-bp67acfmxazb4p****",
"desc": "\u5b9e\u4f8b\u6240\u5c5e\u7684\u4f01\u4e1a\u8d44\u6e90\u7ec4ID\u3002"
},
{
"name": "InternetChargeType",
"type": "文本",
"example": "PayByTraffic",
"desc": "\u7f51\u7edc\u8ba1\u8d39\u7c7b\u578b\u3002\u53ef\u80fd\u503c\uff1a\n\n- PayByBandwidth\uff1a\u6309\u56fa\u5b9a\u5e26\u5bbd\u8ba1\u8d39\u3002\n- PayByTraffic\uff1a\u6309\u4f7f\u7528\u6d41\u91cf\u8ba1\u8d39\u3002"
},
{
"name": "ZoneId",
"type": "文本",
"example": "cn-hangzhou-g",
"desc": "\u5b9e\u4f8b\u6240\u5c5e\u53ef\u7528\u533a\u3002"
},
{
"name": "Recyclable",
"type": "boolean",
"example": "false",
"desc": "\u5b9e\u4f8b\u662f\u5426\u53ef\u4ee5\u56de\u6536\u3002"
},
{
"name": "ISP",
"type": "文本",
"example": "null",
"desc": "> \u8be5\u53c2\u6570\u6b63\u5728\u9080\u6d4b\u4e2d\uff0c\u6682\u672a\u5f00\u653e\u4f7f\u7528\u3002"
},
{
"name": "CreditSpecification",
"type": "文本",
"example": "Standard",
"desc": "\u4fee\u6539\u7a81\u53d1\u6027\u80fd\u5b9e\u4f8b\u7684\u8fd0\u884c\u6a21\u5f0f\u3002\u53ef\u80fd\u503c\uff1a\n\n- Standard\uff1a\u6807\u51c6\u6a21\u5f0f\u3002\u6709\u5173\u5b9e\u4f8b\u6027\u80fd\u7684\u66f4\u591a\u4fe1\u606f\uff0c\u8bf7\u53c2\u89c1[\u4ec0\u4e48\u662f\u7a81\u53d1\u6027\u80fd\u5b9e\u4f8b](~~59977~~)\u4e2d\u7684\u6027\u80fd\u7ea6\u675f\u6a21\u5f0f\u7ae0\u8282\u3002\n- Unlimited\uff1a\u65e0\u6027\u80fd\u7ea6\u675f\u6a21\u5f0f\uff0c\u6709\u5173\u5b9e\u4f8b\u6027\u80fd\u7684\u66f4\u591a\u4fe1\u606f\uff0c\u8bf7\u53c2\u89c1[\u4ec0\u4e48\u662f\u7a81\u53d1\u6027\u80fd\u5b9e\u4f8b](~~59977~~)\u4e2d\u7684\u65e0\u6027\u80fd\u7ea6\u675f\u6a21\u5f0f\u7ae0\u8282\u3002"
},
{
"name": "InstanceTypeFamily",
"type": "文本",
"example": "ecs.g5",
"desc": "\u5b9e\u4f8b\u89c4\u683c\u65cf\u3002"
},
{
"name": "OSType",
"type": "文本",
"example": "linux",
"desc": "\u5b9e\u4f8b\u7684\u64cd\u4f5c\u7cfb\u7edf\u7c7b\u578b\uff0c\u5206\u4e3aWindows Server\u548cLinux\u4e24\u79cd\u3002\u53ef\u80fd\u503c\uff1a\n\n- windows\u3002\n- linux\u3002"
},
{
"name": "NetworkInterfaces",
"type": "json",
"example": {
"type": "json",
"properties": {
"Type": {
"description": "\u5f39\u6027\u7f51\u5361\u7c7b\u578b\u3002\u53ef\u80fd\u503c\uff1a\n- Primary\uff1a\u4e3b\u7f51\u5361\u3002\n- Secondary\uff1a\u8f85\u52a9\u5f39\u6027\u7f51\u5361\u3002",
"type": "文本",
"example": "Primary"
},
"MacAddress": {
"description": "\u5f39\u6027\u7f51\u5361\u7684MAC\u5730\u5740\u3002",
"type": "文本",
"example": "00:16:3e:32:b4:**"
},
"PrimaryIpAddress": {
"description": "\u5f39\u6027\u7f51\u5361\u4e3b\u79c1\u6709IP\u5730\u5740\u3002",
"type": "文本",
"example": "172.17.**.***"
},
"NetworkInterfaceId": {
"description": "\u5f39\u6027\u7f51\u5361\u7684ID\u3002",
"type": "文本",
"example": "eni-2zeh9atclduxvf1z****"
},
"PrivateIpSets": {
"type": "array",
"items": {
"type": "json",
"properties": {
"PrivateIpAddress": {
"description": "\u5b9e\u4f8b\u7684\u79c1\u7f51IP\u5730\u5740\u3002",
"type": "文本",
"example": "172.17.**.**"
},
"Primary": {
"description": "\u662f\u5426\u662f\u4e3b\u79c1\u7f51IP\u5730\u5740\u3002",
"type": "boolean",
"example": "true"
}
}
},
"description": "PrivateIpSet\u7ec4\u6210\u7684\u96c6\u5408\u3002"
},
"Ipv6Sets": {
"type": "array",
"items": {
"type": "json",
"properties": {
"Ipv6Address": {
"description": "\u4e3a\u5f39\u6027\u7f51\u5361\u6307\u5b9a\u7684IPv6\u5730\u5740\u3002",
"type": "文本",
"example": "2408:4321:180:1701:94c7:bc38:3bfa:***"
}
}
},
"description": "\u4e3a\u5f39\u6027\u7f51\u5361\u5206\u914d\u7684IPv6\u5730\u5740\u96c6\u5408\u3002\u4ec5\u5f53\u8bf7\u6c42\u53c2\u6570`AdditionalAttributes.N`\u53d6\u503c\u4e3a`NETWORK_PRIMARY_ENI_IP`\u65f6\uff0c\u624d\u4f1a\u8fd4\u56de\u8be5\u53c2\u6570\u503c\u3002"
},
"Ipv4PrefixSets": {
"type": "array",
"items": {
"type": "json",
"properties": {
"Ipv4Prefix": {
"description": "IPv4\u524d\u7f00\u3002",
"type": "文本",
"example": "47.122.*.*/19"
}
}
},
"description": "IPv4\u524d\u7f00\u96c6\u5408\u3002"
},
"Ipv6PrefixSets": {
"type": "array",
"items": {
"type": "json",
"properties": {
"Ipv6Prefix": {
"description": "IPv6\u524d\u7f00\u3002",
"type": "文本",
"example": "2001:1111:*:*::/64"
}
}
},
"description": "IPv6\u524d\u7f00\u96c6\u5408\u3002"
}
},
"description": "\u5b9e\u4f8b\u5305\u542b\u7684\u5f39\u6027\u7f51\u5361\u96c6\u5408\u3002"
},
"desc": "\u5b9e\u4f8b\u5305\u542b\u7684\u5f39\u6027\u7f51\u5361\u96c6\u5408\u3002"
},
{
"name": "OperationLocks",
"type": "文本、多值",
"example": {
"type": "json",
"properties": {
"LockMsg": {
"description": "\u5b9e\u4f8b\u88ab\u9501\u5b9a\u7684\u63cf\u8ff0\u4fe1\u606f\u3002",
"type": "文本",
"example": "The specified instance is locked due to financial reason."
},
"LockReason": {
"description": "\u9501\u5b9a\u7c7b\u578b\u3002\u53ef\u80fd\u503c\uff1a\n\n- financial\uff1a\u56e0\u6b20\u8d39\u88ab\u9501\u5b9a\u3002\n- security\uff1a\u56e0\u5b89\u5168\u539f\u56e0\u88ab\u9501\u5b9a\u3002\n- Recycling\uff1a\u62a2\u5360\u5f0f\u5b9e\u4f8b\u7684\u5f85\u91ca\u653e\u9501\u5b9a\u72b6\u6001\u3002\n- dedicatedhostfinancial\uff1a\u56e0\u4e3a\u4e13\u6709\u5bbf\u4e3b\u673a\u6b20\u8d39\u5bfc\u81f4ECS\u5b9e\u4f8b\u88ab\u9501\u5b9a\u3002\n- refunded\uff1a\u56e0\u9000\u6b3e\u88ab\u9501\u5b9a\u3002",
"type": "文本",
"example": "Recycling"
}
}
},
"desc": "\u5b9e\u4f8b\u7684\u9501\u5b9a\u539f\u56e0\u3002"
},
{
"name": "Tags",
"type": "json",
"example": {
"type": "json",
"properties": {
"TagValue": {
"description": "\u5b9e\u4f8b\u7684\u6807\u7b7e\u503c\u3002",
"type": "文本",
"example": "TestValue"
},
"TagKey": {
"description": "\u5b9e\u4f8b\u7684\u6807\u7b7e\u952e\u3002",
"type": "文本",
"example": "TestKey"
}
}
},
"desc": "\u5b9e\u4f8b\u7684\u6807\u7b7e\u96c6\u5408\u3002"
},
{
"name": "RdmaIpAddress",
"type": "文本、多值",
"example": {
"description": "HPC\u5b9e\u4f8b\u7684Rdma\u7f51\u7edcIP\u3002",
"type": "文本",
"example": "10.10.10.102"
},
"desc": "HPC\u5b9e\u4f8b\u7684Rdma\u7f51\u7edcIP\u5217\u8868\u3002"
},
{
"name": "SecurityGroupIds",
"type": "文本、多值",
"example": {
"description": "\u5b89\u5168\u7ec4ID\u3002",
"type": "文本",
"example": "sg-bp67acfmxazb4p****"
},
"desc": "\u5b9e\u4f8b\u6240\u5c5e\u5b89\u5168\u7ec4ID\u5217\u8868\u3002"
},
{
"name": "PublicIpAddress",
"type": "文本、多值",
"example": {
"description": "\u5b9e\u4f8b\u516c\u7f51IP\u5730\u5740\u3002",
"type": "文本",
"example": "121.40.**.**"
},
"desc": "\u5b9e\u4f8b\u516c\u7f51IP\u5730\u5740\u5217\u8868\u3002"
},
{
"name": "InnerIpAddress",
"type": "文本、多值",
"example": {
"description": "\u7ecf\u5178\u7f51\u7edc\u7c7b\u578b\u5b9e\u4f8b\u7684\u5185\u7f51IP\u5730\u5740\u3002",
"type": "文本",
"example": "10.170.**.**"
},
"desc": "\u7ecf\u5178\u7f51\u7edc\u7c7b\u578b\u5b9e\u4f8b\u7684\u5185\u7f51IP\u5730\u5740\u5217\u8868\u3002"
},
{
"name": "VpcAttributes",
"type": "json",
"example": {
"VpcId": {
"description": "\u4e13\u6709\u7f51\u7edcVPC ID\u3002",
"type": "文本",
"example": "vpc-2zeuphj08tt7q3brd****"
},
"NatIpAddress": {
"description": "\u4e91\u4ea7\u54c1\u7684IP\uff0c\u7528\u4e8eVPC\u4e91\u4ea7\u54c1\u4e4b\u95f4\u7684\u7f51\u7edc\u4e92\u901a\u3002",
"type": "文本",
"example": "172.17.**.**"
},
"VSwitchId": {
"description": "\u865a\u62df\u4ea4\u6362\u673aID\u3002",
"type": "文本",
"example": "vsw-2zeh0r1pabwtg6wcs****"
},
"PrivateIpAddress": {
"type": "array",
"items": {
"description": "\u79c1\u6709IP\u5730\u5740\u3002",
"type": "文本",
"example": "172.17.**.**"
},
"description": "\u79c1\u6709IP\u5730\u5740\u5217\u8868\u3002"
}
},
"desc": "\u4e13\u6709\u7f51\u7edcVPC\u5c5e\u6027\u3002"
},
{
"name": "EipAddress",
"type": "json",
"example": {
"IsSupportUnassociate": {
"description": "\u662f\u5426\u53ef\u4ee5\u89e3\u7ed1\u5f39\u6027\u516c\u7f51IP\u3002",
"type": "boolean",
"example": "true"
},
"InternetChargeType": {
"description": "\u5f39\u6027\u516c\u7f51IP\u7684\u8ba1\u8d39\u65b9\u5f0f\u3002\n\n- PayByBandwidth\uff1a\u6309\u5e26\u5bbd\u8ba1\u8d39\u3002\n\n- PayByTraffic\uff1a\u6309\u6d41\u91cf\u8ba1\u8d39\u3002",
"type": "文本",
"example": "PayByTraffic"
},
"IpAddress": {
"description": "\u5f39\u6027\u516c\u7f51IP\u3002",
"type": "文本",
"example": "42.112.**.**"
},
"Bandwidth": {
"description": "\u5f39\u6027\u516c\u7f51IP\u7684\u516c\u7f51\u5e26\u5bbd\u9650\u901f\uff0c\u5355\u4f4d\u4e3aMbit/s\u3002",
"type": "整数",
"format": "int32",
"example": "5"
},
"AllocationId": {
"description": "\u5f39\u6027\u516c\u7f51IP\u7684ID\u3002",
"type": "文本",
"example": "eip-2ze88m67qx5z****"
}
},
"desc": "\u5f39\u6027\u516c\u7f51IP\u7ed1\u5b9a\u4fe1\u606f\u3002"
},
{
"name": "HibernationOptions",
"type": "json",
"example": {
"Configured": {
"description": "> \u8be5\u53c2\u6570\u6b63\u5728\u9080\u6d4b\u4e2d\uff0c\u6682\u672a\u5f00\u653e\u4f7f\u7528\u3002",
"type": "boolean",
"example": "false"
}
},
"desc": "> \u8be5\u53c2\u6570\u6b63\u5728\u9080\u6d4b\u4e2d\uff0c\u6682\u672a\u5f00\u653e\u4f7f\u7528\u3002"
},
{
"name": "DedicatedHostAttribute",
"type": "json",
"example": {
"DedicatedHostId": {
"description": "\u4e13\u6709\u5bbf\u4e3b\u673aID\u3002",
"type": "文本",
"example": "dh-bp67acfmxazb4p****"
},
"DedicatedHostName": {
"description": "\u4e13\u6709\u5bbf\u4e3b\u673a\u540d\u79f0\u3002",
"type": "文本",
"example": "testDedicatedHostName"
},
"DedicatedHostClusterId": {
"description": "\u4e13\u6709\u5bbf\u4e3b\u673a\u96c6\u7fa4ID\u3002",
"type": "文本",
"example": "dc-bp67acfmxazb4h****"
}
},
"desc": "\u7531\u4e13\u6709\u5bbf\u4e3b\u673a\u96c6\u7fa4ID\uff08DedicatedHostClusterId\uff09\u3001\u4e13\u6709\u5bbf\u4e3b\u673aID\uff08DedicatedHostId\uff09\u548c\u540d\u79f0\uff08DedicatedHostName\uff09\u7ec4\u6210\u7684\u5bbf\u4e3b\u673a\u5c5e\u6027\u6570\u7ec4\u3002"
},
{
"name": "EcsCapacityReservationAttr",
"type": "json",
"example": {
"CapacityReservationPreference": {
"description": "\u5bb9\u91cf\u9884\u7559\u504f\u597d\u3002",
"type": "文本",
"example": "cr-bp67acfmxazb4p****"
},
"CapacityReservationId": {
"description": "\u5bb9\u91cf\u9884\u7559ID\u3002",
"type": "文本",
"example": "cr-bp67acfmxazb4p****"
}
},
"desc": "\u4e91\u670d\u52a1\u5668ECS\u7684\u5bb9\u91cf\u9884\u7559\u76f8\u5173\u53c2\u6570\u3002"
},
{
"name": "DedicatedInstanceAttribute",
"type": "json",
"example": {
"Affinity": {
"description": "\u4e13\u6709\u5bbf\u4e3b\u673a\u5b9e\u4f8b\u662f\u5426\u4e0e\u4e13\u6709\u5bbf\u4e3b\u673a\u5173\u8054\u3002\u53ef\u80fd\u503c\uff1a\n\n- default\uff1a\u4e13\u6709\u5bbf\u4e3b\u673a\u5b9e\u4f8b\u4e0d\u4e0e\u4e13\u6709\u5bbf\u4e3b\u673a\u5173\u8054\u3002\u505c\u673a\u4e0d\u6536\u8d39\u5b9e\u4f8b\u91cd\u542f\u540e\uff0c\u53ef\u80fd\u4f1a\u653e\u7f6e\u5728\u81ea\u52a8\u8d44\u6e90\u90e8\u7f72\u6c60\u4e2d\u7684\u5176\u5b83\u4e13\u6709\u5bbf\u4e3b\u673a\u4e0a\u3002\n\n- host\uff1a\u4e13\u6709\u5bbf\u4e3b\u673a\u5b9e\u4f8b\u4e0e\u4e13\u6709\u5bbf\u4e3b\u673a\u5173\u8054\u3002\u505c\u673a\u4e0d\u6536\u8d39\u5b9e\u4f8b\u91cd\u542f\u540e\uff0c\u4ecd\u653e\u7f6e\u5728\u539f\u4e13\u6709\u5bbf\u4e3b\u673a\u4e0a\u3002",
"type": "文本",
"example": "default"
},
"Tenancy": {
"description": "\u5b9e\u4f8b\u7684\u5bbf\u4e3b\u673a\u7c7b\u578b\u662f\u5426\u4e3a\u4e13\u6709\u5bbf\u4e3b\u673a\u3002\u53ef\u80fd\u503c\uff1a\n\n- default\uff1a\u5b9e\u4f8b\u7684\u5bbf\u4e3b\u673a\u7c7b\u578b\u4e0d\u662f\u4e13\u6709\u5bbf\u4e3b\u673a\u3002\n\n- host\uff1a\u5b9e\u4f8b\u7684\u5bbf\u4e3b\u673a\u7c7b\u578b\u4e3a\u4e13\u6709\u5bbf\u4e3b\u673a\u3002",
"type": "文本",
"example": "default"
}
},
"desc": "\u4e13\u6709\u5bbf\u4e3b\u673a\u5b9e\u4f8b\u7684\u5c5e\u6027\u3002"
},
{
"name": "CpuOptions",
"type": "json",
"example": {
"Numa": {
"description": "\u5206\u914d\u7684\u7ebf\u7a0b\u6570\u3002\u53ef\u80fd\u503c\u4e3a2\u3002",
"type": "文本",
"example": "2"
},
"CoreCount": {
"description": "\u7269\u7406CPU\u6838\u5fc3\u6570\u3002",
"type": "整数",
"format": "int32",
"example": "2"
},
"ThreadsPerCore": {
"description": "CPU\u7ebf\u7a0b\u6570\u3002",
"type": "整数",
"format": "int32",
"example": "4"
}
},
"desc": "CPU\u914d\u7f6e\u8be6\u60c5\u3002"
},
{
"name": "MetadataOptions",
"type": "json",
"example": {
"HttpEndpoint": {
"description": "\u662f\u5426\u542f\u7528\u5b9e\u4f8b\u5143\u6570\u636e\u7684\u8bbf\u95ee\u901a\u9053\u3002\u53ef\u80fd\u503c\uff1a\n- enabled\uff1a\u542f\u7528\u3002\n- disabled\uff1a\u7981\u7528\u3002",
"type": "文本",
"example": "enabled"
},
"HttpPutResponseHopLimit": {
"description": "> \u8be5\u53c2\u6570\u6682\u672a\u5f00\u653e\u4f7f\u7528\u3002",
"type": "整数",
"format": "int32",
"example": "0"
},
"HttpTokens": {
"description": "\u8bbf\u95ee\u5b9e\u4f8b\u5143\u6570\u636e\u65f6\u662f\u5426\u5f3a\u5236\u4f7f\u7528\u52a0\u56fa\u6a21\u5f0f\uff08IMDSv2\uff09\u3002\u53ef\u80fd\u503c\uff1a\n- optional\uff1a\u4e0d\u5f3a\u5236\u4f7f\u7528\u3002\n- required\uff1a\u5f3a\u5236\u4f7f\u7528\u3002",
"type": "文本",
"example": "optional"
}
},
"desc": "\u5143\u6570\u636e\u9009\u9879\u96c6\u5408\u3002"
},
{
"name": "ImageOptions",
"type": "json",
"example": {
"LoginAsNonRoot": {
"description": "\u4f7f\u7528\u8be5\u955c\u50cf\u7684\u5b9e\u4f8b\u662f\u5426\u652f\u6301\u4f7f\u7528ecs-user\u7528\u6237\u767b\u5f55\u3002\u53ef\u80fd\u503c\uff1a\n\n- true\uff1a\u662f\n\n- false\uff1a\u5426",
"type": "boolean",
"example": "false"
}
},
"desc": "\u955c\u50cf\u76f8\u5173\u5c5e\u6027\u4fe1\u606f\u3002"
}
]
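
Every record in this template (apparently templates/aliyun_ecs.json, as referenced by ClOUD_MAP above) carries the same four keys: name, type, example and desc. A minimal sketch of turning such records into attribute payloads; the value-type mapping below is an illustrative assumption, not the project's actual enum:

import json

TYPE_MAP = {"文本": "text", "整数": "int", "float": "float", "json": "json", "boolean": "bool"}  # assumed mapping

with open("templates/aliyun_ecs.json") as f:   # path as referenced in ClOUD_MAP above
    template = json.load(f)

attr_payloads = [
    {"name": item["name"], "value_type": TYPE_MAP.get(item["type"], "text"), "description": item["desc"]}
    for item in template
]
print(len(attr_payloads), attr_payloads[0]["name"])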


@@ -0,0 +1,427 @@
[
{
"name": "amiLaunchIndex",
"type": "整数",
"desc": "The AMI launch index, which can be used to find this instance in the launch group.",
"example": "0"
},
{
"name": "architecture",
"type": "文本",
"desc": "The architecture of the image.",
"example": "x86_64"
},
{
"name": "blockDeviceMapping",
"type": "json",
"desc": "Any block device mapping entries for the instance.",
"example": {
"item": {
"deviceName": "/dev/xvda",
"ebs": {
"volumeId": "vol-1234567890abcdef0",
"status": "attached",
"attachTime": "2015-12-22T10:44:09.000Z",
"deleteOnTermination": "true"
}
}
}
},
{
"name": "bootMode",
"type": "文本",
"desc": "The boot mode that was specified by the AMI. If the value is uefi-preferred, the AMI supports both UEFI and Legacy BIOS. The currentInstanceBootMode parameter is the boot mode that is used to boot the instance at launch or start.",
"example": null
},
{
"name": "capacityReservationId",
"type": "文本",
"desc": "The ID of the Capacity Reservation.",
"example": null
},
{
"name": "capacityReservationSpecification",
"type": "json",
"desc": "Information about the Capacity Reservation targeting option.",
"example": null
},
{
"name": "clientToken",
"type": "文本",
"desc": "The idempotency token you provided when you launched the instance, if applicable.",
"example": "xMcwG14507example"
},
{
"name": "cpuOptions",
"type": "json",
"desc": "The CPU options for the instance.",
"example": {
"coreCount": "1",
"threadsPerCore": "1"
}
},
{
"name": "currentInstanceBootMode",
"type": "文本",
"desc": "The boot mode that is used to boot the instance at launch or start. For more information, see Boot modes in the Amazon EC2 User Guide.",
"example": null
},
{
"name": "dnsName",
"type": "文本",
"desc": "[IPv4 only] The public DNS name assigned to the instance. This name is not available until the instance enters the running state. This name is only available if you've enabled DNS hostnames for your VPC.",
"example": "ec2-54-194-252-215.eu-west-1.compute.amazonaws.com"
},
{
"name": "ebsOptimized",
"type": "Boolean",
"desc": "Indicates whether the instance is optimized for Amazon EBS I/O. This optimization provides dedicated throughput to Amazon EBS and an optimized configuration stack to provide optimal I/O performance. This optimization isn't available with all instance types. Additional usage charges apply when using an EBS Optimized instance.",
"example": "false"
},
{
"name": "elasticGpuAssociationSet",
"type": "json",
"desc": "The Elastic GPU associated with the instance.",
"example": null
},
{
"name": "elasticInferenceAcceleratorAssociationSet",
"type": "json",
"desc": "The elastic inference accelerator associated with the instance.",
"example": null
},
{
"name": "enaSupport",
"type": "Boolean",
"desc": "Specifies whether enhanced networking with ENA is enabled.",
"example": null
},
{
"name": "enclaveOptions",
"type": "json",
"desc": "Indicates whether the instance is enabled for AWS Nitro Enclaves.",
"example": null
},
{
"name": "groupSet",
"type": "json",
"desc": "The security groups for the instance.",
"example": {
"item": {
"groupId": "sg-e4076980",
"groupName": "SecurityGroup1"
}
}
},
{
"name": "hibernationOptions",
"type": "json",
"desc": "Indicates whether the instance is enabled for hibernation.",
"example": null
},
{
"name": "hypervisor",
"type": "文本",
"desc": "The hypervisor type of the instance. The value xen is used for both Xen and Nitro hypervisors.",
"example": "xen"
},
{
"name": "iamInstanceProfile",
"type": "json",
"desc": "The IAM instance profile associated with the instance, if applicable.",
"example": {
"arn": "arn:aws:iam::123456789012:instance-profile/AdminRole",
"id": "ABCAJEDNCAA64SSD123AB"
}
},
{
"name": "imageId",
"type": "文本",
"desc": "The ID of the AMI used to launch the instance.",
"example": "ami-bff32ccc"
},
{
"name": "instanceId",
"type": "文本",
"desc": "The ID of the instance.",
"example": "i-1234567890abcdef0"
},
{
"name": "instanceLifecycle",
"type": "文本",
"desc": "Indicates whether this is a Spot Instance or a Scheduled Instance.",
"example": null
},
{
"name": "instanceState",
"type": "json",
"desc": "The current state of the instance.",
"example": {
"code": "16",
"name": "running"
}
},
{
"name": "instanceType",
"type": "文本",
"desc": "The instance type.",
"example": "t2.micro"
},
{
"name": "ipAddress",
"type": "文本",
"desc": "The public IPv4 address, or the Carrier IP address assigned to the instance, if applicable.",
"example": "54.194.252.215"
},
{
"name": "ipv6Address",
"type": "文本",
"desc": "The IPv6 address assigned to the instance.",
"example": null
},
{
"name": "kernelId",
"type": "文本",
"desc": "The kernel associated with this instance, if applicable.",
"example": null
},
{
"name": "keyName",
"type": "文本",
"desc": "The name of the key pair, if this instance was launched with an associated key pair.",
"example": "my_keypair"
},
{
"name": "launchTime",
"type": "Time",
"desc": "The time the instance was launched.",
"example": "2018-05-08T16:46:19.000Z"
},
{
"name": "licenseSet",
"type": "json",
"desc": "The license configurations for the instance.",
"example": null
},
{
"name": "maintenanceOptions",
"type": "json",
"desc": "Provides information on the recovery and maintenance options of your instance.",
"example": null
},
{
"name": "metadataOptions",
"type": "json",
"desc": "The metadata options for the instance.",
"example": null
},
{
"name": "monitoring",
"type": "json",
"desc": "The monitoring for the instance.",
"example": {
"state": "disabled"
}
},
{
"name": "networkInterfaceSet",
"type": "json",
"desc": "The network interfaces for the instance.",
"example": {
"item": {
"networkInterfaceId": "eni-551ba033",
"subnetId": "subnet-56f5f633",
"vpcId": "vpc-11112222",
"description": "Primary network interface",
"ownerId": "123456789012",
"status": "in-use",
"macAddress": "02:dd:2c:5e:01:69",
"privateIpAddress": "192.168.1.88",
"privateDnsName": "ip-192-168-1-88.eu-west-1.compute.internal",
"sourceDestCheck": "true",
"groupSet": {
"item": {
"groupId": "sg-e4076980",
"groupName": "SecurityGroup1"
}
},
"attachment": {
"attachmentId": "eni-attach-39697adc",
"deviceIndex": "0",
"status": "attached",
"attachTime": "2018-05-08T16:46:19.000Z",
"deleteOnTermination": "true"
},
"association": {
"publicIp": "54.194.252.215",
"publicDnsName": "ec2-54-194-252-215.eu-west-1.compute.amazonaws.com",
"ipOwnerId": "amazon"
},
"privateIpAddressesSet": {
"item": {
"privateIpAddress": "192.168.1.88",
"privateDnsName": "ip-192-168-1-88.eu-west-1.compute.internal",
"primary": "true",
"association": {
"publicIp": "54.194.252.215",
"publicDnsName": "ec2-54-194-252-215.eu-west-1.compute.amazonaws.com",
"ipOwnerId": "amazon"
}
}
},
"ipv6AddressesSet": {
"item": {
"ipv6Address": "2001:db8:1234:1a2b::123"
}
}
}
}
},
{
"name": "outpostArn",
"type": "文本",
"desc": "The Amazon Resource Name (ARN) of the Outpost.",
"example": null
},
{
"name": "placement",
"type": "json",
"desc": "The location where the instance launched, if applicable.",
"example": {
"availabilityZone": "eu-west-1c",
"groupName": null,
"tenancy": "default"
}
},
{
"name": "platform",
"type": "文本",
"desc": "The value is Windows for Windows instances; otherwise blank.",
"example": null
},
{
"name": "platformDetails",
"type": "文本",
"desc": "The platform details value for the instance. For more information, see AMI billing information fields in the Amazon EC2 User Guide.",
"example": null
},
{
"name": "privateDnsName",
"type": "文本",
"desc": "[IPv4 only] The private DNS hostname name assigned to the instance. This DNS hostname can only be used inside the Amazon EC2 network. This name is not available until the instance enters the running state.",
"example": "ip-192-168-1-88.eu-west-1.compute.internal"
},
{
"name": "privateDnsNameOptions",
"type": "json",
"desc": "The options for the instance hostname.",
"example": null
},
{
"name": "privateIpAddress",
"type": "文本",
"desc": "The private IPv4 address assigned to the instance.",
"example": "192.168.1.88"
},
{
"name": "productCodes",
"type": "json",
"desc": "The product codes attached to this instance, if applicable.",
"example": null
},
{
"name": "ramdiskId",
"type": "文本",
"desc": "The RAM disk associated with this instance, if applicable.",
"example": null
},
{
"name": "reason",
"type": "文本",
"desc": "The reason for the most recent state transition. This might be an empty string.",
"example": null
},
{
"name": "rootDeviceName",
"type": "文本",
"desc": "The device name of the root device volume (for example, /dev/sda1).",
"example": "/dev/xvda"
},
{
"name": "rootDeviceType",
"type": "文本",
"desc": "The root device type used by the AMI. The AMI can use an EBS volume or an instance store volume.",
"example": "ebs"
},
{
"name": "sourceDestCheck",
"type": "Boolean",
"desc": "Indicates whether source/destination checking is enabled.",
"example": "true"
},
{
"name": "spotInstanceRequestId",
"type": "文本",
"desc": "If the request is a Spot Instance request, the ID of the request.",
"example": null
},
{
"name": "sriovNetSupport",
"type": "文本",
"desc": "Specifies whether enhanced networking with the Intel 82599 Virtual Function interface is enabled.",
"example": null
},
{
"name": "stateReason",
"type": "json",
"desc": "The reason for the most recent state transition.",
"example": null
},
{
"name": "subnetId",
"type": "文本",
"desc": "The ID of the subnet in which the instance is running.",
"example": "subnet-56f5f633"
},
{
"name": "tagSet",
"type": "json",
"desc": "Any tags assigned to the instance.",
"example": {
"item": {
"key": "Name",
"value": "Server_1"
}
}
},
{
"name": "tpmSupport",
"type": "文本",
"desc": "If the instance is configured for NitroTPM support, the value is v2.0. For more information, see NitroTPM in the Amazon EC2 User Guide.",
"example": null
},
{
"name": "usageOperation",
"type": "文本",
"desc": "The usage operation value for the instance. For more information, see AMI billing information fields in the Amazon EC2 User Guide.",
"example": null
},
{
"name": "usageOperationUpdateTime",
"type": "Time",
"desc": "The time that the usage operation was last updated.",
"example": null
},
{
"name": "virtualizationType",
"type": "文本",
"desc": "The virtualization type of the instance.",
"example": "hvm"
},
{
"name": "vpcId",
"type": "文本",
"desc": "The ID of the VPC in which the instance is running.",
"example": "vpc-11112222"
}
]


@@ -0,0 +1,292 @@
[
{
"name": "status",
"type": "文本",
"example": "ACTIVE",
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u72b6\u6001\u3002\n\n\u53d6\u503c\u8303\u56f4:\n\nACTIVE\u3001BUILD\u3001DELETED\u3001ERROR\u3001HARD_REBOOT\u3001MIGRATING\u3001PAUSED\u3001REBOOT\u3001REBUILD\u3001RESIZE\u3001REVERT_RESIZE\u3001SHUTOFF\u3001SHELVED\u3001SHELVED_OFFLOADED\u3001SOFT_DELETED\u3001SUSPENDED\u3001VERIFY_RESIZE\n\n\u5f39\u6027\u4e91\u670d\u52a1\u5668\u72b6\u6001\u8bf4\u660e\u8bf7\u53c2\u8003[\u4e91\u670d\u52a1\u5668\u72b6\u6001](https://support.huaweicloud.com/api-ecs/ecs_08_0002.html)"
},
{
"name": "updated",
"type": "文本",
"example": "2019-05-22T03:30:52Z",
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u66f4\u65b0\u65f6\u95f4\u3002\n\n\u65f6\u95f4\u683c\u5f0f\u4f8b\u5982:2019-05-22T03:30:52Z"
},
{
"name": "auto_terminate_time",
"type": "文本",
"example": "2020-01-19T03:30:52Z",
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u81ea\u52a8\u91ca\u653e\u65f6\u95f4\u3002\n\n\u65f6\u95f4\u683c\u5f0f\u4f8b\u5982:2020-01-19T03:30:52Z"
},
{
"name": "hostId",
"type": "文本",
"example": "c7145889b2e3202cd295ceddb1742ff8941b827b586861fd0acedf64",
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u6240\u5728\u4e3b\u673a\u7684\u4e3b\u673aID\u3002"
},
{
"name": "OS-EXT-SRV-ATTR:host",
"type": "文本",
"example": "pod01.cn-north-1c",
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u6240\u5728\u4e3b\u673a\u7684\u4e3b\u673a\u540d\u79f0\u3002"
},
{
"name": "addresses",
"type": "json",
"example": null,
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u7684\u7f51\u7edc\u5c5e\u6027\u3002"
},
{
"name": "key_name",
"type": "文本",
"example": "KeyPair-test",
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u4f7f\u7528\u7684\u5bc6\u94a5\u5bf9\u540d\u79f0\u3002"
},
{
"name": "image",
"type": "json",
"example": null,
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u955c\u50cf\u4fe1\u606f\u3002"
},
{
"name": "OS-EXT-STS:task_state",
"type": "文本",
"example": "rebooting",
"desc": "\u6269\u5c55\u5c5e\u6027,\u5f39\u6027\u4e91\u670d\u52a1\u5668\u5f53\u524d\u4efb\u52a1\u7684\u72b6\u6001\u3002\n\n\u53d6\u503c\u8303\u56f4\u8bf7\u53c2\u8003[\u4e91\u670d\u52a1\u5668\u72b6\u6001](https://support.huaweicloud.com/api-ecs/ecs_08_0002.html)\u88683\u3002"
},
{
"name": "OS-EXT-STS:vm_state",
"type": "文本",
"example": "active",
"desc": "\u6269\u5c55\u5c5e\u6027,\u5f39\u6027\u4e91\u670d\u52a1\u5668\u5f53\u524d\u72b6\u6001\u3002\n\n\u4e91\u670d\u52a1\u5668\u72b6\u6001\u8bf4\u660e\u8bf7\u53c2\u8003[\u4e91\u670d\u52a1\u5668\u72b6\u6001](https://support.huaweicloud.com/api-ecs/ecs_08_0002.html)\u3002"
},
{
"name": "OS-EXT-SRV-ATTR:instance_name",
"type": "文本",
"example": "instance-0048a91b",
"desc": "\u6269\u5c55\u5c5e\u6027,\u5f39\u6027\u4e91\u670d\u52a1\u5668\u522b\u540d\u3002"
},
{
"name": "OS-EXT-SRV-ATTR:hypervisor_hostname",
"type": "文本",
"example": "nova022@36",
"desc": "\u6269\u5c55\u5c5e\u6027,\u5f39\u6027\u4e91\u670d\u52a1\u5668\u6240\u5728\u865a\u62df\u5316\u4e3b\u673a\u540d\u3002"
},
{
"name": "flavor",
"type": "json",
"example": null,
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u89c4\u683c\u4fe1\u606f\u3002"
},
{
"name": "id",
"type": "文本",
"example": "4f4b3dfa-eb70-47cf-a60a-998a53bd6666",
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668ID,\u683c\u5f0f\u4e3aUUID\u3002"
},
{
"name": "security_groups",
"type": "json",
"example": {
"$ref": "#/definitions/ServerSecurityGroup"
},
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u6240\u5c5e\u5b89\u5168\u7ec4\u5217\u8868\u3002"
},
{
"name": "OS-EXT-AZ:availability_zone",
"type": "文本",
"example": "cn-north-1c",
"desc": "\u6269\u5c55\u5c5e\u6027,\u5f39\u6027\u4e91\u670d\u52a1\u5668\u6240\u5728\u53ef\u7528\u533a\u540d\u79f0\u3002"
},
{
"name": "user_id",
"type": "文本",
"example": "05498fe56b8010d41f7fc01e280b6666",
"desc": "\u521b\u5efa\u5f39\u6027\u4e91\u670d\u52a1\u5668\u7684\u7528\u6237ID,\u683c\u5f0f\u4e3aUUID\u3002"
},
{
"name": "name",
"type": "文本",
"example": "ecs-test-server",
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u540d\u79f0\u3002"
},
{
"name": "created",
"type": "文本",
"example": "2017-07-15T11:30:52Z",
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u521b\u5efa\u65f6\u95f4\u3002\n\n\u65f6\u95f4\u683c\u5f0f\u4f8b\u5982:2019-05-22T03:19:19Z"
},
{
"name": "tenant_id",
"type": "文本",
"example": "743b4c0428d94531b9f2add666646666",
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u6240\u5c5e\u79df\u6237ID,\u5373\u9879\u76eeid,\u548cproject_id\u8868\u793a\u76f8\u540c\u7684\u6982\u5ff5,\u683c\u5f0f\u4e3aUUID\u3002"
},
{
"name": "OS-DCF:diskConfig",
"type": "文本",
"example": "AUTO",
"desc": "\u6269\u5c55\u5c5e\u6027, diskConfig\u7684\u7c7b\u578b\u3002\n\n- MANUAL,\u955c\u50cf\u7a7a\u95f4\u4e0d\u4f1a\u6269\u5c55\u3002\n- AUTO,\u7cfb\u7edf\u76d8\u955c\u50cf\u7a7a\u95f4\u4f1a\u81ea\u52a8\u6269\u5c55\u4e3a\u4e0eflavor\u5927\u5c0f\u4e00\u81f4\u3002"
},
{
"name": "accessIPv4",
"type": "文本",
"example": null,
"desc": "\u9884\u7559\u5c5e\u6027\u3002"
},
{
"name": "accessIPv6",
"type": "文本",
"example": null,
"desc": "\u9884\u7559\u5c5e\u6027\u3002"
},
{
"name": "fault",
"type": "文本",
"example": null,
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u6545\u969c\u4fe1\u606f\u3002\n\n\u53ef\u9009\u53c2\u6570,\u5728\u5f39\u6027\u4e91\u670d\u52a1\u5668\u72b6\u6001\u4e3aERROR\u4e14\u5b58\u5728\u5f02\u5e38\u7684\u60c5\u51b5\u4e0b\u8fd4\u56de\u3002"
},
{
"name": "progress",
"type": "整数",
"example": null,
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u8fdb\u5ea6\u3002"
},
{
"name": "OS-EXT-STS:power_state",
"type": "整数",
"example": 4,
"desc": "\u6269\u5c55\u5c5e\u6027,\u5f39\u6027\u4e91\u670d\u52a1\u5668\u7535\u6e90\u72b6\u6001\u3002"
},
{
"name": "config_drive",
"type": "文本",
"example": null,
"desc": "config drive\u4fe1\u606f\u3002"
},
{
"name": "metadata",
"type": "json",
"example": null,
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u5143\u6570\u636e\u3002\n\n> \u8bf4\u660e:\n> \n> \u5143\u6570\u636e\u5305\u542b\u7cfb\u7edf\u9ed8\u8ba4\u6dfb\u52a0\u5b57\u6bb5\u548c\u7528\u6237\u8bbe\u7f6e\u7684\u5b57\u6bb5\u3002\n\n\u7cfb\u7edf\u9ed8\u8ba4\u6dfb\u52a0\u5b57\u6bb5\n\n1. charging_mode\n\u4e91\u670d\u52a1\u5668\u7684\u8ba1\u8d39\u7c7b\u578b\u3002\n\n- \u201c0\u201d:\u6309\u9700\u8ba1\u8d39(\u5373postPaid-\u540e\u4ed8\u8d39\u65b9\u5f0f)\u3002\n- \u201c1\u201d:\u6309\u5305\u5e74\u5305\u6708\u8ba1\u8d39(\u5373prePaid-\u9884\u4ed8\u8d39\u65b9\u5f0f)\u3002\"2\":\u7ade\u4ef7\u5b9e\u4f8b\u8ba1\u8d39\n\n2. metering.order_id\n\u6309\u201c\u5305\u5e74/\u5305\u6708\u201d\u8ba1\u8d39\u7684\u4e91\u670d\u52a1\u5668\u5bf9\u5e94\u7684\u8ba2\u5355ID\u3002\n\n3. metering.product_id\n\u6309\u201c\u5305\u5e74/\u5305\u6708\u201d\u8ba1\u8d39\u7684\u4e91\u670d\u52a1\u5668\u5bf9\u5e94\u7684\u4ea7\u54c1ID\u3002\n\n4. vpc_id\n\u4e91\u670d\u52a1\u5668\u6240\u5c5e\u7684\u865a\u62df\u79c1\u6709\u4e91ID\u3002\n\n5. EcmResStatus\n\u4e91\u670d\u52a1\u5668\u7684\u51bb\u7ed3\u72b6\u6001\u3002\n\n- normal:\u4e91\u670d\u52a1\u5668\u6b63\u5e38\u72b6\u6001(\u672a\u88ab\u51bb\u7ed3)\u3002\n- freeze:\u4e91\u670d\u52a1\u5668\u88ab\u51bb\u7ed3\u3002\n\n> \u5f53\u4e91\u670d\u52a1\u5668\u88ab\u51bb\u7ed3\u6216\u8005\u89e3\u51bb\u540e,\u7cfb\u7edf\u9ed8\u8ba4\u6dfb\u52a0\u8be5\u5b57\u6bb5,\u4e14\u8be5\u5b57\u6bb5\u5fc5\u9009\u3002\n\n6. metering.image_id\n\u4e91\u670d\u52a1\u5668\u64cd\u4f5c\u7cfb\u7edf\u5bf9\u5e94\u7684\u955c\u50cfID\n\n7. metering.imagetype\n\u955c\u50cf\u7c7b\u578b,\u76ee\u524d\u652f\u6301:\n\n- \u516c\u5171\u955c\u50cf(gold)\n- \u79c1\u6709\u955c\u50cf(private)\n- \u5171\u4eab\u955c\u50cf(shared)\n\n8. metering.resourcespeccode\n\u4e91\u670d\u52a1\u5668\u5bf9\u5e94\u7684\u8d44\u6e90\u89c4\u683c\u3002\n\n9. image_name\n\u4e91\u670d\u52a1\u5668\u64cd\u4f5c\u7cfb\u7edf\u5bf9\u5e94\u7684\u955c\u50cf\u540d\u79f0\u3002\n\n10. os_bit\n\u64cd\u4f5c\u7cfb\u7edf\u4f4d\u6570,\u4e00\u822c\u53d6\u503c\u4e3a\u201c32\u201d\u6216\u8005\u201c64\u201d\u3002\n\n11. lockCheckEndpoint\n\u56de\u8c03URL,\u7528\u4e8e\u68c0\u67e5\u5f39\u6027\u4e91\u670d\u52a1\u5668\u7684\u52a0\u9501\u662f\u5426\u6709\u6548\u3002\n\n- \u5982\u679c\u6709\u6548,\u5219\u4e91\u670d\u52a1\u5668\u4fdd\u6301\u9501\u5b9a\u72b6\u6001\u3002\n- \u5982\u679c\u65e0\u6548,\u89e3\u9664\u9501\u5b9a\u72b6\u6001,\u5220\u9664\u5931\u6548\u7684\u9501\u3002\n\n12. lockSource\n\u5f39\u6027\u4e91\u670d\u52a1\u5668\u6765\u81ea\u54ea\u4e2a\u670d\u52a1\u3002\u8ba2\u5355\u52a0\u9501(ORDER)\n\n13. lockSourceId\n\u5f39\u6027\u4e91\u670d\u52a1\u5668\u7684\u52a0\u9501\u6765\u81ea\u54ea\u4e2aID\u3002lockSource\u4e3a\u201cORDER\u201d\u65f6,lockSourceId\u4e3a\u8ba2\u5355ID\u3002\n\n14. lockScene\n\u5f39\u6027\u4e91\u670d\u52a1\u5668\u7684\u52a0\u9501\u7c7b\u578b\u3002\n\n- \u6309\u9700\u8f6c\u5305\u5468\u671f(TO_PERIOD_LOCK)\n\n15. virtual_env_type\n\n- IOS\u955c\u50cf\u521b\u5efa\u865a\u62df\u673a,\"virtual_env_type\": \"IsoImage\" \u5c5e\u6027;\n- \u975eIOS\u955c\u50cf\u521b\u5efa\u865a\u62df\u673a,\u572819.5.0\u7248\u672c\u4ee5\u540e\u521b\u5efa\u7684\u865a\u62df\u673a\u5c06\u4e0d\u4f1a\u6dfb\u52a0virtual_env_type \u5c5e\u6027,\u800c\u5728\u6b64\u4e4b\u524d\u7684\u7248\u672c\u521b\u5efa\u7684\u865a\u62df\u673a\u53ef\u80fd\u4f1a\u8fd4\u56de\"virtual_env_type\": \"FusionCompute\"\u5c5e\u6027 \u3002\n\n> virtual_env_type\u5c5e\u6027\u4e0d\u5141\u8bb8\u7528\u6237\u589e\u52a0\u3001\u5220\u9664\u548c\u4fee\u6539\u3002\n\n16. 
metering.resourcetype\n\u4e91\u670d\u52a1\u5668\u5bf9\u5e94\u7684\u8d44\u6e90\u7c7b\u578b\u3002\n\n17. os_type\n\u64cd\u4f5c\u7cfb\u7edf\u7c7b\u578b,\u53d6\u503c\u4e3a:Linux\u3001Windows\u3002\n\n18. cascaded.instance_extrainfo\n\u7cfb\u7edf\u5185\u90e8\u865a\u62df\u673a\u6269\u5c55\u4fe1\u606f\u3002\n\n19. __support_agent_list\n\u4e91\u670d\u52a1\u5668\u662f\u5426\u652f\u6301\u4f01\u4e1a\u4e3b\u673a\u5b89\u5168\u3001\u4e3b\u673a\u76d1\u63a7\u3002\n\n- \u201chss\u201d:\u4f01\u4e1a\u4e3b\u673a\u5b89\u5168\n- \u201cces\u201d:\u4e3b\u673a\u76d1\u63a7\n\n20. agency_name\n\u59d4\u6258\u7684\u540d\u79f0\u3002\n\n\u59d4\u6258\u662f\u7531\u79df\u6237\u7ba1\u7406\u5458\u5728\u7edf\u4e00\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1(Identity and Access Management,IAM)\u4e0a\u521b\u5efa\u7684,\u53ef\u4ee5\u4e3a\u5f39\u6027\u4e91\u670d\u52a1\u5668\u63d0\u4f9b\u8bbf\u95ee\u4e91\u670d\u52a1\u7684\u4e34\u65f6\u51ed\u8bc1\u3002"
},
{
"name": "OS-SRV-USG:launched_at",
"type": "文本",
"example": "2018-08-15T14:21:22.000000",
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u542f\u52a8\u65f6\u95f4\u3002\u65f6\u95f4\u683c\u5f0f\u4f8b\u5982:2019-05-22T03:23:59.000000"
},
{
"name": "OS-SRV-USG:terminated_at",
"type": "文本",
"example": "2019-05-22T03:23:59.000000",
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u5220\u9664\u65f6\u95f4\u3002\n\n\u65f6\u95f4\u683c\u5f0f\u4f8b\u5982:2019-05-22T03:23:59.000000"
},
{
"name": "os-extended-volumes:volumes_attached",
"type": "json",
"example": {
"$ref": "#/definitions/ServerExtendVolumeAttachment"
},
"desc": "\u6302\u8f7d\u5230\u5f39\u6027\u4e91\u670d\u52a1\u5668\u4e0a\u7684\u78c1\u76d8\u3002"
},
{
"name": "description",
"type": "文本",
"example": "ecs description",
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u7684\u63cf\u8ff0\u4fe1\u606f\u3002"
},
{
"name": "host_status",
"type": "文本",
"example": "UP",
"desc": "nova-compute\u72b6\u6001\u3002\n\n- UP:\u670d\u52a1\u6b63\u5e38\n- UNKNOWN:\u72b6\u6001\u672a\u77e5\n- DOWN:\u670d\u52a1\u5f02\u5e38\n- MAINTENANCE:\u7ef4\u62a4\u72b6\u6001\n- \u7a7a\u5b57\u7b26\u4e32:\u5f39\u6027\u4e91\u670d\u52a1\u5668\u65e0\u4e3b\u673a\u4fe1\u606f"
},
{
"name": "OS-EXT-SRV-ATTR:hostname",
"type": "文本",
"example": null,
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u7684\u4e3b\u673a\u540d\u3002"
},
{
"name": "OS-EXT-SRV-ATTR:reservation_id",
"type": "文本",
"example": "r-f06p3js8",
"desc": "\u6279\u91cf\u521b\u5efa\u573a\u666f,\u5f39\u6027\u4e91\u670d\u52a1\u5668\u7684\u9884\u7559ID\u3002"
},
{
"name": "OS-EXT-SRV-ATTR:launch_index",
"type": "整数",
"example": null,
"desc": "\u6279\u91cf\u521b\u5efa\u573a\u666f,\u5f39\u6027\u4e91\u670d\u52a1\u5668\u7684\u542f\u52a8\u987a\u5e8f\u3002"
},
{
"name": "OS-EXT-SRV-ATTR:kernel_id",
"type": "文本",
"example": null,
"desc": "\u82e5\u4f7f\u7528AMI\u683c\u5f0f\u7684\u955c\u50cf,\u5219\u8868\u793akernel image\u7684UUID;\u5426\u5219,\u7559\u7a7a\u3002"
},
{
"name": "OS-EXT-SRV-ATTR:ramdisk_id",
"type": "文本",
"example": null,
"desc": "\u82e5\u4f7f\u7528AMI\u683c\u5f0f\u955c\u50cf,\u5219\u8868\u793aramdisk image\u7684UUID;\u5426\u5219,\u7559\u7a7a\u3002"
},
{
"name": "OS-EXT-SRV-ATTR:root_device_name",
"type": "文本",
"example": "/dev/vda",
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u7cfb\u7edf\u76d8\u7684\u8bbe\u5907\u540d\u79f0\u3002"
},
{
"name": "OS-EXT-SRV-ATTR:user_data",
"type": "文本",
"example": "IyEvYmluL2Jhc2gKZWNobyAncm9vdDokNiRjcGRkSjckWm5WZHNiR253Z0l0SGlxUjZxbWtLTlJaeU9lZUtKd3dPbG9XSFdUeGFzWjA1STYwdnJYRTdTUTZGbEpFbWlXZ21WNGNmZ1pac1laN1BkMTBLRndyeC8nIHwgY2hwYXNzd2Q6666",
"desc": "\u521b\u5efa\u5f39\u6027\u4e91\u670d\u52a1\u5668\u65f6\u6307\u5b9a\u7684user_data\u3002"
},
{
"name": "locked",
"type": "boolean",
"example": null,
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u662f\u5426\u4e3a\u9501\u5b9a\u72b6\u6001\u3002\n\n- true:\u9501\u5b9a\n- false:\u672a\u9501\u5b9a"
},
{
"name": "tags",
"type": "文本、多值",
"example": {
"type": "文本"
},
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u6807\u7b7e\u3002"
},
{
"name": "os:scheduler_hints",
"type": "json",
"example": null,
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u8c03\u5ea6\u4fe1\u606f"
},
{
"name": "enterprise_project_id",
"type": "文本",
"example": "0",
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u6240\u5c5e\u7684\u4f01\u4e1a\u9879\u76eeID\u3002"
},
{
"name": "sys_tags",
"type": "文本、多值",
"example": {
"$ref": "#/definitions/ServerSystemTag"
},
"desc": "\u5f39\u6027\u4e91\u670d\u52a1\u5668\u7cfb\u7edf\u6807\u7b7e\u3002"
},
{
"name": "cpu_options",
"type": "json",
"example": null,
"desc": "\u81ea\u5b9a\u4e49CPU\u9009\u9879\u3002"
},
{
"name": "hypervisor",
"type": "文本",
"example": null,
"desc": "hypervisor\u4fe1\u606f\u3002"
}
]


@@ -0,0 +1,37 @@
[{
"name":"manufacturer",
"type": "文本",
"example":"HUAWEI Technology Co.,Ltd",
"desc":"制造产商"
},{
"name":"sn",
"type": "文本",
"example":"102030059898",
"desc":"设备序列号"
},{
"name":"device_name",
"type": "文本",
"example":"USG6525E",
"desc":"设备名称"
},{
"name":"device_model",
"type": "文本",
"example":"2011.2.321.1.205",
"desc":"设备细分类型 结合相关产商获取相应的产品类型"
},{
"name":"description",
"type": "文本",
"example":"Huawei Vwersatile Routing Platform Software",
"desc":"设备描述"
},{
"name":"manager_ip",
"type": "文本",
"example":"192.168.1.1",
"desc":"管理ip"
}, {
"name":"ips",
"type": "文本、多值",
"example":"192.168.1.1, 192.168.1.2",
"desc":"ips"
}
]


@@ -0,0 +1,297 @@
[
{
"name": "Placement",
"type": "json",
"desc": "实例所在的位置。",
"example": {
"HostId": "host-h3m57oik",
"ProjectId": 1174660,
"HostIds": [],
"Zone": "ap-guangzhou-1",
"HostIps": []
}
},
{
"name": "InstanceId",
"type": "文本",
"desc": "实例ID。",
"example": "ins-xlsyru2j"
},
{
"name": "InstanceType",
"type": "文本",
"desc": "实例机型。",
"example": "S2.SMALL2"
},
{
"name": "CPU",
"type": "整数",
"desc": "实例的CPU核数单位核。",
"example": 1
},
{
"name": "Memory",
"type": "整数",
"desc": "实例内存容量单位GB。",
"example": 1
},
{
"name": "RestrictState",
"type": "文本",
"desc": "实例业务状态。取值范围: NORMAL表示正常状态的实例 EXPIRED表示过期的实例 PROTECTIVELY_ISOLATED表示被安全隔离的实例。",
"example": "PROTECTIVELY_ISOLATED"
},
{
"name": "InstanceName",
"type": "文本",
"desc": "实例名称。",
"example": "test"
},
{
"name": "InstanceChargeType",
"type": "文本",
"desc": "实例计费模式。取值范围: PREPAID表示预付费即包年包月 POSTPAID_BY_HOUR表示后付费即按量计费 CDHPAID专用宿主机付费即只对专用宿主机计费不对专用宿主机上的实例计费。 SPOTPAID表示竞价实例付费。",
"example": "POSTPAID_BY_HOUR"
},
{
"name": "SystemDisk",
"type": "json",
"desc": "实例系统盘信息。",
"example": {
"DiskSize": 50,
"CdcId": null,
"DiskId": "disk-czsodtl1",
"DiskType": "CLOUD_SSD"
}
},
{
"name": "DataDisks",
"type": "json",
"desc": "实例数据盘信息。",
"example": [
{
"DeleteWithInstance": true,
"Encrypt": true,
"CdcId": null,
"DiskType": "CLOUD_SSD",
"ThroughputPerformance": 0,
"KmsKeyId": null,
"DiskSize": 50,
"SnapshotId": null,
"DiskId": "disk-bzsodtn1"
}
]
},
{
"name": "PrivateIpAddresses",
"type": "文本、多值",
"desc": "实例主网卡的内网IP列表。",
"example": [
"172.16.32.78"
]
},
{
"name": "PublicIpAddresses",
"type": "文本、多值",
"desc": "实例主网卡的公网IP列表。 注意:此字段可能返回 null表示取不到有效值。",
"example": [
"123.207.11.190"
]
},
{
"name": "InternetAccessible",
"type": "json",
"desc": "实例带宽信息。",
"example": {
"PublicIpAssigned": true,
"InternetChargeType": "TRAFFIC_POSTPAID_BY_HOUR",
"BandwidthPackageId": null,
"InternetMaxBandwidthOut": 1
}
},
{
"name": "VirtualPrivateCloud",
"type": "json",
"desc": "实例所属虚拟私有网络信息。",
"example": {
"SubnetId": "subnet-mv4sn55k",
"AsVpcGateway": false,
"Ipv6AddressCount": 1,
"VpcId": "vpc-m0cnatxj",
"PrivateIpAddresses": [
"172.16.3.59"
]
}
},
{
"name": "ImageId",
"type": "文本",
"desc": "生产实例所使用的镜像ID。",
"example": "img-8toqc6s3"
},
{
"name": "RenewFlag",
"type": "文本",
"desc": "自动续费标识。取值范围: NOTIFY_AND_MANUAL_RENEW表示通知即将过期但不自动续费 NOTIFY_AND_AUTO_RENEW表示通知即将过期而且自动续费 DISABLE_NOTIFY_AND_MANUAL_RENEW表示不通知即将过期也不自动续费。 注意后付费模式本项为null",
"example": "NOTIFY_AND_MANUAL_RENEW"
},
{
"name": "CreatedTime",
"type": "json",
"desc": "创建时间。按照ISO8601标准表示并且使用UTC时间。格式为YYYY-MM-DDThh:mm:ssZ。",
"example": "2020-09-22T00:00:00+00:00"
},
{
"name": "ExpiredTime",
"type": "json",
"desc": "到期时间。按照ISO8601标准表示并且使用UTC时间。格式为YYYY-MM-DDThh:mm:ssZ。注意后付费模式本项为null",
"example": "2020-09-22T00:00:00+00:00"
},
{
"name": "OsName",
"type": "文本",
"desc": "操作系统名称。",
"example": "CentOS 7.4 64bit"
},
{
"name": "SecurityGroupIds",
"type": "文本、多值",
"desc": "实例所属安全组。该参数可以通过调用 DescribeSecurityGroups 的返回值中的sgId字段来获取。",
"example": [
"sg-p1ezv4wz"
]
},
{
"name": "LoginSettings",
"type": "json",
"desc": "实例登录设置。目前只返回实例所关联的密钥。",
"example": {
"Password": "123qwe!@#QWE",
"KeepImageLogin": "False",
"KeyIds": [
"skey-b4vakk62"
]
}
},
{
"name": "InstanceState",
"type": "文本",
"desc": "实例状态。取值范围: PENDING表示创建中 LAUNCH_FAILED表示创建失败 RUNNING表示运行中 STOPPED表示关机 STARTING表示开机中 STOPPING表示关机中 REBOOTING表示重启中 SHUTDOWN表示停止待销毁 TERMINATING表示销毁中。",
"example": "RUNNING"
},
{
"name": "Tags",
"type": "json",
"desc": "实例关联的标签列表。",
"example": [
{
"Value": "test",
"Key": "test"
}
]
},
{
"name": "StopChargingMode",
"type": "文本",
"desc": "实例的关机计费模式。 取值范围: KEEP_CHARGING关机继续收费 STOP_CHARGING关机停止收费NOT_APPLICABLE实例处于非关机状态或者不适用关机停止计费的条件",
"example": "NOT_APPLICABLE"
},
{
"name": "Uuid",
"type": "文本",
"desc": "实例全局唯一ID",
"example": "e85f1388-0422-410d-8e50-bef540e78c18"
},
{
"name": "LatestOperation",
"type": "文本",
"desc": "实例的最新操作。例StopInstances、ResetInstance。 注意:此字段可能返回 null表示取不到有效值。",
"example": "ResetInstancesType"
},
{
"name": "LatestOperationState",
"type": "文本",
"desc": "实例的最新操作状态。取值范围: SUCCESS表示操作成功 OPERATING表示操作执行中 FAILED表示操作失败 注意:此字段可能返回 null表示取不到有效值。",
"example": "SUCCESS"
},
{
"name": "LatestOperationRequestId",
"type": "文本",
"desc": "实例最新操作的唯一请求 ID。 注意:此字段可能返回 null表示取不到有效值。",
"example": "c7de1287-061d-4ace-8caf-6ad8e5a2f29a"
},
{
"name": "DisasterRecoverGroupId",
"type": "文本",
"desc": "分散置放群组ID。 注意:此字段可能返回 null表示取不到有效值。",
"example": ""
},
{
"name": "IPv6Addresses",
"type": "文本、多值",
"desc": "实例的IPv6地址。 注意:此字段可能返回 null表示取不到有效值。",
"example": [
"2001:0db8:86a3:08d3:1319:8a2e:0370:7344"
]
},
{
"name": "CamRoleName",
"type": "文本",
"desc": "CAM角色名。 注意:此字段可能返回 null表示取不到有效值。",
"example": ""
},
{
"name": "HpcClusterId",
"type": "文本",
"desc": "高性能计算集群ID。 注意:此字段可能返回 null表示取不到有效值。",
"example": ""
},
{
"name": "RdmaIpAddresses",
"type": "文本、多值",
"desc": "高性能计算集群IP列表。 注意:此字段可能返回 null表示取不到有效值。",
"example": []
},
{
"name": "IsolatedSource",
"type": "文本",
"desc": "实例隔离类型。取值范围: ARREAR表示欠费隔离 EXPIRE表示到期隔离 MANMADE表示主动退还隔离 NOTISOLATED表示未隔离 注意:此字段可能返回 null表示取不到有效值。",
"example": "NOTISOLATED"
},
{
"name": "GPUInfo",
"type": "json",
"desc": "GPU信息。如果是gpu类型子机该值会返回GPU信息如果是其他类型子机则不返回。 注意:此字段可能返回 null表示取不到有效值。",
"example": null
},
{
"name": "LicenseType",
"type": "文本",
"desc": "实例的操作系统许可类型默认为TencentCloud",
"example": null
},
{
"name": "DisableApiTermination",
"type": "Boolean",
"desc": "实例销毁保护标志表示是否允许通过api接口删除实例。取值范围 TRUE表示开启实例保护不允许通过api接口删除实例 FALSE表示关闭实例保护允许通过api接口删除实例 默认取值FALSE。",
"example": null
},
{
"name": "DefaultLoginUser",
"type": "文本",
"desc": "默认登录用户。",
"example": null
},
{
"name": "DefaultLoginPort",
"type": "整数",
"desc": "默认登录端口。",
"example": null
},
{
"name": "LatestOperationErrorMsg",
"type": "文本",
"desc": "实例的最新操作错误信息。 注意:此字段可能返回 null表示取不到有效值。",
"example": null
}
]

View File

@@ -0,0 +1,431 @@
# -*- coding:utf-8 -*-
from __future__ import unicode_literals
from flask import current_app
from api.extensions import cache
from api.lib.cmdb.custom_dashboard import CustomDashboardManager
from api.models.cmdb import Attribute
from api.models.cmdb import CIType
from api.models.cmdb import CITypeAttribute
from api.models.cmdb import RelationType
class AttributeCache(object):
PREFIX_ID = 'Field::ID::{0}'
PREFIX_NAME = 'Field::Name::{0}'
PREFIX_ALIAS = 'Field::Alias::{0}'
@classmethod
def get(cls, key):
if key is None:
return
attr = cache.get(cls.PREFIX_NAME.format(key))
attr = attr or cache.get(cls.PREFIX_ID.format(key))
attr = attr or cache.get(cls.PREFIX_ALIAS.format(key))
if attr is None:
attr = Attribute.get_by(name=key, first=True, to_dict=False)
attr = attr or Attribute.get_by_id(key)
attr = attr or Attribute.get_by(alias=key, first=True, to_dict=False)
if attr is not None:
cls.set(attr)
return attr
@classmethod
def set(cls, attr):
cache.set(cls.PREFIX_ID.format(attr.id), attr)
cache.set(cls.PREFIX_NAME.format(attr.name), attr)
cache.set(cls.PREFIX_ALIAS.format(attr.alias), attr)
@classmethod
def clean(cls, attr):
cache.delete(cls.PREFIX_ID.format(attr.id))
cache.delete(cls.PREFIX_NAME.format(attr.name))
cache.delete(cls.PREFIX_ALIAS.format(attr.alias))
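# Usage sketch (comments only, illustrative): AttributeCache.get() accepts an attribute
# id, name or alias, falls back to the database on a cache miss and re-populates all
# three keys via set(), e.g.
#     attr = AttributeCache.get("hostname")   # by name
#     attr = AttributeCache.get(attr.id)      # by id, now served from the cache
# clean(attr) is expected to be called after the attribute is updated or deleted.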
class CITypeCache(object):
PREFIX_ID = "CIType::ID::{0}"
PREFIX_NAME = "CIType::Name::{0}"
PREFIX_ALIAS = "CIType::Alias::{0}"
@classmethod
def get(cls, key):
if key is None:
return
ct = cache.get(cls.PREFIX_NAME.format(key))
ct = ct or cache.get(cls.PREFIX_ID.format(key))
ct = ct or cache.get(cls.PREFIX_ALIAS.format(key))
if ct is None:
ct = CIType.get_by(name=key, first=True, to_dict=False)
ct = ct or CIType.get_by_id(key)
ct = ct or CIType.get_by(alias=key, first=True, to_dict=False)
if ct is not None:
cls.set(ct)
return ct
@classmethod
def set(cls, ct):
cache.set(cls.PREFIX_NAME.format(ct.name), ct)
cache.set(cls.PREFIX_ID.format(ct.id), ct)
cache.set(cls.PREFIX_ALIAS.format(ct.alias), ct)
@classmethod
def clean(cls, key):
ct = cls.get(key)
if ct is not None:
cache.delete(cls.PREFIX_NAME.format(ct.name))
cache.delete(cls.PREFIX_ID.format(ct.id))
cache.delete(cls.PREFIX_ALIAS.format(ct.alias))
class RelationTypeCache(object):
PREFIX_ID = "RelationType::ID::{0}"
PREFIX_NAME = "RelationType::Name::{0}"
@classmethod
def get(cls, key):
if key is None:
return
ct = cache.get(cls.PREFIX_NAME.format(key))
ct = ct or cache.get(cls.PREFIX_ID.format(key))
if ct is None:
ct = RelationType.get_by(name=key, first=True, to_dict=False) or RelationType.get_by_id(key)
if ct is not None:
cls.set(ct)
return ct
@classmethod
def set(cls, ct):
cache.set(cls.PREFIX_NAME.format(ct.name), ct)
cache.set(cls.PREFIX_ID.format(ct.id), ct)
@classmethod
def clean(cls, key):
ct = cls.get(key)
if ct is not None:
cache.delete(cls.PREFIX_NAME.format(ct.name))
cache.delete(cls.PREFIX_ID.format(ct.id))
class CITypeAttributesCache(object):
"""
key is type_id or type_name
"""
PREFIX_ID = "CITypeAttributes::TypeID::{0}"
PREFIX_NAME = "CITypeAttributes::TypeName::{0}"
PREFIX_ID2 = "CITypeAttributes2::TypeID::{0}"
PREFIX_NAME2 = "CITypeAttributes2::TypeName::{0}"
@classmethod
def get(cls, key):
if key is None:
return
attrs = cache.get(cls.PREFIX_NAME.format(key))
attrs = attrs or cache.get(cls.PREFIX_ID.format(key))
if not attrs:
attrs = CITypeAttribute.get_by(type_id=key, to_dict=False)
if not attrs:
ci_type = CIType.get_by(name=key, first=True, to_dict=False)
if ci_type is not None:
attrs = CITypeAttribute.get_by(type_id=ci_type.id, to_dict=False)
if attrs is not None:
cls.set(key, attrs)
return attrs
@classmethod
def get2(cls, key):
"""
return [(type_attr, attr), ]
:param key:
:return:
"""
if key is None:
return
attrs = cache.get(cls.PREFIX_NAME2.format(key))
attrs = attrs or cache.get(cls.PREFIX_ID2.format(key))
if not attrs:
attrs = CITypeAttribute.get_by(type_id=key, to_dict=False)
if not attrs:
ci_type = CIType.get_by(name=key, first=True, to_dict=False)
if ci_type is not None:
attrs = CITypeAttribute.get_by(type_id=ci_type.id, to_dict=False)
if attrs is not None:
attrs = [(i, AttributeCache.get(i.attr_id)) for i in attrs]
cls.set2(key, attrs)
return attrs
@classmethod
def set(cls, key, values):
ci_type = CITypeCache.get(key)
if ci_type is not None:
cache.set(cls.PREFIX_ID.format(ci_type.id), values)
cache.set(cls.PREFIX_NAME.format(ci_type.name), values)
@classmethod
def set2(cls, key, values):
ci_type = CITypeCache.get(key)
if ci_type is not None:
cache.set(cls.PREFIX_ID2.format(ci_type.id), values)
cache.set(cls.PREFIX_NAME2.format(ci_type.name), values)
@classmethod
def clean(cls, key):
ci_type = CITypeCache.get(key)
attrs = cls.get(key)
if attrs is not None and ci_type:
cache.delete(cls.PREFIX_ID.format(ci_type.id))
cache.delete(cls.PREFIX_NAME.format(ci_type.name))
attrs2 = cls.get2(key)
if attrs2 is not None and ci_type:
cache.delete(cls.PREFIX_ID2.format(ci_type.id))
cache.delete(cls.PREFIX_NAME2.format(ci_type.name))
class CITypeAttributeCache(object):
"""
key is type_id & attr_id
"""
PREFIX_ID = "CITypeAttribute::TypeID::{0}::AttrID::{1}"
@classmethod
def get(cls, type_id, attr_id):
attr = cache.get(cls.PREFIX_ID.format(type_id, attr_id))
attr = attr or CITypeAttribute.get_by(type_id=type_id, attr_id=attr_id, first=True, to_dict=False)
if attr is not None:
cls.set(type_id, attr_id, attr)
return attr
@classmethod
def set(cls, type_id, attr_id, attr):
cache.set(cls.PREFIX_ID.format(type_id, attr_id), attr)
@classmethod
def clean(cls, type_id, attr_id):
cache.delete(cls.PREFIX_ID.format(type_id, attr_id))
class CMDBCounterCache(object):
KEY = 'CMDB::Counter'
@classmethod
def get(cls):
result = cache.get(cls.KEY) or {}
if not result:
result = cls.reset()
return result
@classmethod
def set(cls, result):
cache.set(cls.KEY, result, timeout=0)
@classmethod
def reset(cls):
customs = CustomDashboardManager.get()
result = {}
for custom in customs:
if custom['category'] == 0:
res = cls.sum_counter(custom)
elif custom['category'] == 1:
res = cls.attribute_counter(custom)
else:
res = cls.relation_counter(custom.get('type_id'),
custom.get('level'),
custom.get('options', {}).get('filter', ''),
custom.get('options', {}).get('type_ids', ''))
if res:
result[custom['id']] = res
cls.set(result)
return result
@classmethod
def update(cls, custom, flush=True):
result = cache.get(cls.KEY) or {}
if not result:
result = cls.reset()
if custom['category'] == 0:
res = cls.sum_counter(custom)
elif custom['category'] == 1:
res = cls.attribute_counter(custom)
else:
res = cls.relation_counter(custom.get('type_id'),
custom.get('level'),
custom.get('options', {}).get('filter', ''),
custom.get('options', {}).get('type_ids', ''))
if res and flush:
result[custom['id']] = res
cls.set(result)
return res
@staticmethod
def relation_counter(type_id, level, other_filer, type_ids):
from api.lib.cmdb.search.ci_relation.search import Search as RelSearch
from api.lib.cmdb.search import SearchError
from api.lib.cmdb.search.ci import search
query = "_type:{}".format(type_id)
s = search(query, count=1000000)
try:
type_names, _, _, _, _, _ = s.search()
except SearchError as e:
current_app.logger.error(e)
return
type_id_names = [(str(i.get('_id')), i.get(i.get('unique'))) for i in type_names]
s = RelSearch([i[0] for i in type_id_names], level, other_filer or '')
try:
stats = s.statistics(type_ids)
except SearchError as e:
current_app.logger.error(e)
return
id2name = dict(type_id_names)
type_ids = set()
for i in (stats.get('detail') or []):
for j in stats['detail'][i]:
type_ids.add(j)
for type_id in type_ids:
_type = CITypeCache.get(type_id)
id2name[type_id] = _type and _type.alias
result = dict(summary={}, detail={})
for i in stats:
if i == "detail":
for j in stats['detail']:
if id2name[j]:
result['detail'][id2name[j]] = dict()
for _j in stats['detail'][j]:
result['detail'][id2name[j]][id2name[_j]] = stats['detail'][j][_j]
elif id2name.get(i):
result['summary'][id2name[i]] = stats[i]
return result
@staticmethod
def attribute_counter(custom):
from api.lib.cmdb.search import SearchError
from api.lib.cmdb.search.ci import search
from api.lib.cmdb.utils import ValueTypeMap
custom.setdefault('options', {})
type_id = custom.get('type_id')
attr_id = custom.get('attr_id')
type_ids = custom['options'].get('type_ids') or (type_id and [type_id])
attr_ids = list(map(str, custom['options'].get('attr_ids') or (attr_id and [attr_id])))
try:
attr2value_type = [AttributeCache.get(i).value_type for i in attr_ids]
except AttributeError:
return
other_filter = custom['options'].get('filter')
other_filter = "{}".format(other_filter) if other_filter else ''
if custom['options'].get('ret') == 'cis':
query = "_type:({}),{}".format(";".join(map(str, type_ids)), other_filter)
s = search(query, fl=attr_ids, ret_key='alias', count=100)
try:
cis, _, _, _, _, _ = s.search()
except SearchError as e:
current_app.logger.error(e)
return
return cis
result = dict()
# level = 1
query = "_type:({}),{}".format(";".join(map(str, type_ids)), other_filter)
s = search(query, fl=attr_ids, facet=[attr_ids[0]], count=1)
try:
_, _, _, _, _, facet = s.search()
except SearchError as e:
current_app.logger.error(e)
return
for i in (list(facet.values()) or [[]])[0]:
result[ValueTypeMap.serialize2[attr2value_type[0]](str(i[0]))] = i[1]
if len(attr_ids) == 1:
return result
# level = 2
for v in result:
query = "_type:({}),{},{}:{}".format(";".join(map(str, type_ids)), other_filter, attr_ids[0], v)
s = search(query, fl=attr_ids, facet=[attr_ids[1]], count=1)
try:
_, _, _, _, _, facet = s.search()
except SearchError as e:
current_app.logger.error(e)
return
result[v] = dict()
for i in (list(facet.values()) or [[]])[0]:
result[v][ValueTypeMap.serialize2[attr2value_type[1]](str(i[0]))] = i[1]
if len(attr_ids) == 2:
return result
# level = 3
for v1 in result:
if not isinstance(result[v1], dict):
continue
for v2 in result[v1]:
query = "_type:({}),{},{}:{},{}:{}".format(";".join(map(str, type_ids)), other_filter,
attr_ids[0], v1, attr_ids[1], v2)
s = search(query, fl=attr_ids, facet=[attr_ids[2]], count=1)
try:
_, _, _, _, _, facet = s.search()
except SearchError as e:
current_app.logger.error(e)
return
result[v1][v2] = dict()
for i in (list(facet.values()) or [[]])[0]:
result[v1][v2][ValueTypeMap.serialize2[attr2value_type[2]](str(i[0]))] = i[1]
return result
@staticmethod
def sum_counter(custom):
from api.lib.cmdb.search import SearchError
from api.lib.cmdb.search.ci import search
custom.setdefault('options', {})
type_id = custom.get('type_id')
type_ids = custom['options'].get('type_ids') or (type_id and [type_id])
other_filter = custom['options'].get('filter') or ''
query = "_type:({}),{}".format(";".join(map(str, type_ids)), other_filter)
s = search(query, count=1)
try:
_, _, _, _, numfound, _ = s.search()
except SearchError as e:
current_app.logger.error(e)
return
return numfound
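A minimal sketch of how a dashboard definition is evaluated, assuming hypothetical ids and options: category 0 maps to sum_counter, 1 to attribute_counter, and anything else to relation_counter.
def _example_counter_preview():
    # Sketch only: the ids and options below are hypothetical; flush=False computes the
    # counter without writing the shared cache key (this is what preview relies on).
    custom = dict(id=1, category=1, type_id=2, attr_id=3,
                  options={"type_ids": [2], "attr_ids": [3], "filter": ""})
    return CMDBCounterCache.update(custom, flush=False)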

cmdb-api/api/lib/cmdb/ci.py: 1096 lines (file diff suppressed because it is too large)

File diff suppressed because it is too large

@@ -0,0 +1,105 @@
# -*- coding:utf-8 -*-
from api.lib.utils import BaseEnum
class ValueTypeEnum(BaseEnum):
INT = "0"
FLOAT = "1"
TEXT = "2"
DATETIME = "3"
DATE = "4"
TIME = "5"
JSON = "6"
class ConstraintEnum(BaseEnum):
One2Many = "0"
One2One = "1"
Many2Many = "2"
class CIStatusEnum(BaseEnum):
REVIEW = "0"
VALIDATE = "1"
class ExistPolicy(BaseEnum):
REJECT = "reject"
NEED = "need"
IGNORE = "ignore"
REPLACE = "replace"
class OperateType(BaseEnum):
ADD = "0"
DELETE = "1"
UPDATE = "2"
class CITypeOperateType(BaseEnum):
ADD = "0" # 新增模型
UPDATE = "1" # 修改模型
DELETE = "2" # 删除模型
ADD_ATTRIBUTE = "3" # 新增属性
UPDATE_ATTRIBUTE = "4" # 修改属性
DELETE_ATTRIBUTE = "5" # 删除属性
ADD_TRIGGER = "6" # 新增触发器
UPDATE_TRIGGER = "7" # 修改触发器
DELETE_TRIGGER = "8" # 删除触发器
ADD_UNIQUE_CONSTRAINT = "9" # 新增联合唯一
UPDATE_UNIQUE_CONSTRAINT = "10" # 修改联合唯一
DELETE_UNIQUE_CONSTRAINT = "11" # 删除联合唯一
ADD_RELATION = "12" # 新增关系
DELETE_RELATION = "13" # 删除关系
class RetKey(BaseEnum):
ID = "id"
NAME = "name"
ALIAS = "alias"
class ResourceTypeEnum(BaseEnum):
CI = "CIType"
CI_TYPE = "CIType" # create/update/delete/read/config/grant
CI_TYPE_RELATION = "CITypeRelation" # create/delete/grant
RELATION_VIEW = "RelationView" # read/update/delete/grant
CI_FILTER = "CIFilter" # read
class PermEnum(BaseEnum):
ADD = "create"
UPDATE = "update"
DELETE = "delete"
READ = "read"
CONFIG = "config"
GRANT = "grant"
class RoleEnum(BaseEnum):
CONFIG = "cmdb_admin"
CMDB_READ_ALL = "CMDB_READ_ALL"
class AutoDiscoveryType(BaseEnum):
AGENT = "agent"
SNMP = "snmp"
HTTP = "http"
class AttributeDefaultValueEnum(BaseEnum):
CREATED_AT = "$created_at"
UPDATED_AT = "$updated_at"
AUTO_INC_ID = "$auto_inc_id"
CMDB_QUEUE = "one_cmdb_async"
REDIS_PREFIX_CI = "ONE_CMDB"
REDIS_PREFIX_CI_RELATION = "CMDB_CI_RELATION"
BUILTIN_KEYWORDS = {'id', '_id', 'ci_id', 'type', '_type', 'ci_type'}
L_TYPE = None
L_CI = None
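The enum members above are plain string codes, so downstream code compares and stores them directly; a small sketch, assuming it mirrors how the search layer pads bare dates:
def _example_pad_date(value, value_type):
    # Illustration only: bare "YYYY-MM-DD" values are padded to full datetimes before querying.
    if value_type == ValueTypeEnum.DATE and len(value) == 10:
        return "{} 00:00:00".format(value)
    return value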

@@ -0,0 +1,82 @@
# -*- coding:utf-8 -*-
from flask import abort
from api.lib.cmdb.resp_format import ErrFormat
from api.models.cmdb import CustomDashboard
from api.models.cmdb import SystemConfig
class CustomDashboardManager(object):
cls = CustomDashboard
@staticmethod
def get():
return sorted(CustomDashboard.get_by(to_dict=True), key=lambda x: (x["category"], x['order']))
@staticmethod
def preview(**kwargs):
from api.lib.cmdb.cache import CMDBCounterCache
res = CMDBCounterCache.update(kwargs, flush=False)
return res
@staticmethod
def add(**kwargs):
from api.lib.cmdb.cache import CMDBCounterCache
if kwargs.get('name'):
CustomDashboard.get_by(name=kwargs['name']) and abort(400, ErrFormat.custom_name_duplicate)
new = CustomDashboard.create(**kwargs)
res = CMDBCounterCache.update(new.to_dict())
return new, res
@staticmethod
def update(_id, **kwargs):
from api.lib.cmdb.cache import CMDBCounterCache
existed = CustomDashboard.get_by_id(_id) or abort(404, ErrFormat.not_found)
new = existed.update(**kwargs)
res = CMDBCounterCache.update(new.to_dict())
return new, res
@staticmethod
def batch_update(id2options):
for _id in id2options:
existed = CustomDashboard.get_by_id(_id) or abort(404, ErrFormat.not_found)
existed.update(options=id2options[_id])
@staticmethod
def delete(_id):
existed = CustomDashboard.get_by_id(_id) or abort(404, ErrFormat.not_found)
existed.soft_delete()
class SystemConfigManager(object):
cls = SystemConfig
@staticmethod
def get(name):
return SystemConfig.get_by(name=name, first=True, to_dict=True)
@staticmethod
def create_or_update(name, option):
existed = SystemConfig.get_by(name=name, first=True, to_dict=False)
if existed is not None:
return existed.update(option=option)
else:
return SystemConfig.create(name=name, option=option)
@staticmethod
def delete(name):
existed = SystemConfig.get_by(name=name, first=True, to_dict=False) or abort(404, ErrFormat.not_found)
existed.soft_delete()
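A minimal sketch of creating a dashboard card through the manager, assuming an application and login context; the payload fields shown are hypothetical values.
def _example_add_dashboard_card():
    # Sketch only: add() returns the new row and its freshly computed counter.
    payload = dict(name="hosts by idc", category=1, type_id=1, attr_id=2,
                   options={"attr_ids": [2]}, order=0)
    new, counter = CustomDashboardManager.add(**payload)
    return new, counter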

@@ -0,0 +1,354 @@
# -*- coding:utf-8 -*-
import json
from flask import abort
from flask_login import current_user
from api.extensions import db
from api.lib.cmdb.cache import AttributeCache
from api.lib.cmdb.cache import RelationTypeCache
from api.lib.cmdb.const import OperateType
from api.lib.cmdb.perms import CIFilterPermsCRUD
from api.lib.cmdb.resp_format import ErrFormat
from api.lib.perm.acl.cache import UserCache
from api.models.cmdb import Attribute
from api.models.cmdb import AttributeHistory
from api.models.cmdb import CIRelationHistory
from api.models.cmdb import CITriggerHistory
from api.models.cmdb import CITypeHistory
from api.models.cmdb import CITypeTrigger
from api.models.cmdb import CITypeUniqueConstraint
from api.models.cmdb import OperationRecord
class AttributeHistoryManger(object):
@staticmethod
def get_records_for_attributes(start, end, username, page, page_size, operate_type, type_id,
ci_id=None, attr_id=None):
records = db.session.query(OperationRecord, AttributeHistory).join(
AttributeHistory, OperationRecord.id == AttributeHistory.record_id)
if start:
records = records.filter(OperationRecord.created_at >= start)
if end:
records = records.filter(OperationRecord.created_at <= end)
if type_id:
records = records.filter(OperationRecord.type_id == type_id)
if username:
user = UserCache.get(username)
if user:
records = records.filter(OperationRecord.uid == user.uid)
else:
return abort(404, ErrFormat.user_not_found.format(username))
if operate_type:
records = records.filter(AttributeHistory.operate_type == operate_type)
if ci_id is not None:
records = records.filter(AttributeHistory.ci_id == ci_id)
if attr_id is not None:
records = records.filter(AttributeHistory.attr_id == attr_id)
records = records.order_by(AttributeHistory.id.desc()).offset(page_size * (page - 1)).limit(page_size).all()
total = len(records)
res = {}
for record in records:
record_id = record.OperationRecord.id
attr_hist = record.AttributeHistory.to_dict()
attr_hist['attr'] = AttributeCache.get(attr_hist['attr_id'])
if attr_hist['attr']:
attr_hist['attr_name'] = attr_hist['attr'].name
attr_hist['attr_alias'] = attr_hist['attr'].alias
attr_hist.pop("attr")
if record_id not in res:
record_dict = record.OperationRecord.to_dict()
record_dict["user"] = UserCache.get(record_dict.get("uid"))
if record_dict["user"]:
record_dict['user'] = record_dict['user'].nickname
res[record_id] = [record_dict, [attr_hist]]
else:
res[record_id][1].append(attr_hist)
attr_filter = CIFilterPermsCRUD.get_attr_filter(type_id)
if attr_filter:
    for _record_id in res:
        res[_record_id][1] = [hist for hist in res[_record_id][1] if hist.get('attr_name') in attr_filter]
res = [res[i] for i in sorted(res.keys(), reverse=True)]
return total, res
@staticmethod
def get_records_for_relation(start, end, username, page, page_size, operate_type, type_id,
first_ci_id=None, second_ci_id=None):
records = db.session.query(OperationRecord, CIRelationHistory).join(
CIRelationHistory, OperationRecord.id == CIRelationHistory.record_id)
if start:
records = records.filter(OperationRecord.created_at >= start)
if end:
records = records.filter(OperationRecord.created_at <= end)
if type_id:
records = records.filter(OperationRecord.type_id == type_id)
if username:
user = UserCache.get(username)
if user:
records = records.filter(OperationRecord.uid == user.uid)
else:
return abort(404, ErrFormat.user_not_found.format(username))
if operate_type:
records = records.filter(CIRelationHistory.operate_type == operate_type)
if first_ci_id is not None:
records = records.filter(CIRelationHistory.first_ci_id == first_ci_id)
if second_ci_id is not None:
records = records.filter(CIRelationHistory.second_ci_id == second_ci_id)
records = records.order_by(CIRelationHistory.id.desc()).offset(page_size * (page - 1)).limit(page_size).all()
total = len(records)
res = {}
ci_ids = set()
for record in records:
record_id = record.OperationRecord.id
rel_hist = record.CIRelationHistory.to_dict()
ci_ids.add(rel_hist['first_ci_id'])
ci_ids.add(rel_hist['second_ci_id'])
if record_id not in res:
record_dict = record.OperationRecord.to_dict()
record_dict["user"] = UserCache.get(record_dict.get("uid"))
if record_dict["user"]:
record_dict['user'] = record_dict['user'].nickname
res[record_id] = [record_dict, [rel_hist]]
else:
res[record_id][1].append(rel_hist)
res = [res[i] for i in sorted(res.keys(), reverse=True)]
from api.lib.cmdb.ci import CIManager
cis = CIManager().get_cis_by_ids(list(ci_ids),
unique_required=True)
cis = {i['_id']: i for i in cis}
return total, res, cis
@staticmethod
def get_by_ci_id(ci_id):
res = db.session.query(AttributeHistory, Attribute, OperationRecord).join(
Attribute, Attribute.id == AttributeHistory.attr_id).join(
OperationRecord, OperationRecord.id == AttributeHistory.record_id).filter(
AttributeHistory.ci_id == ci_id).order_by(AttributeHistory.id.desc())
from api.lib.cmdb.ci import CIManager
ci = CIManager.get_by_id(ci_id)
attr_filter = CIFilterPermsCRUD.get_attr_filter(ci.type_id) if ci else None
result = []
for i in res:
attr = i.Attribute
if attr_filter and attr.name not in attr_filter:
continue
user = UserCache.get(i.OperationRecord.uid)
hist = i.AttributeHistory
record = i.OperationRecord
item = dict(attr_name=attr.name,
attr_alias=attr.alias,
operate_type=hist.operate_type,
username=user and user.nickname,
old=hist.old,
new=hist.new,
created_at=record.created_at.strftime('%Y-%m-%d %H:%M:%S'),
record_id=record.id,
hid=hist.id
)
result.append(item)
return result
@staticmethod
def get_record_detail(record_id):
from api.lib.cmdb.ci import CIManager
record = (OperationRecord.get_by_id(record_id) or
abort(404, ErrFormat.record_not_found.format("id={}".format(record_id))))
username = UserCache.get(record.uid).nickname or UserCache.get(record.uid).username
timestamp = record.created_at.strftime("%Y-%m-%d %H:%M:%S")
attr_history = AttributeHistory.get_by(record_id=record_id, to_dict=False)
rel_history = CIRelationHistory.get_by(record_id=record_id, to_dict=False)
attr_dict, rel_dict = dict(), {"add": [], "delete": []}
for attr_h in attr_history:
attr_dict[AttributeCache.get(attr_h.attr_id).alias] = dict(
old=attr_h.old,
new=attr_h.new,
operate_type=attr_h.operate_type)
for rel_h in rel_history:
first = CIManager.get_ci_by_id(rel_h.first_ci_id)
second = CIManager.get_ci_by_id(rel_h.second_ci_id)
rel_dict[rel_h.operate_type].append((first, RelationTypeCache.get(rel_h.relation_type_id).name, second))
return username, timestamp, attr_dict, rel_dict
@staticmethod
def add(record_id, ci_id, history_list, type_id=None, flush=False, commit=True):
if record_id is None:
record = OperationRecord.create(uid=current_user.uid, type_id=type_id)
record_id = record.id
for attr_id, operate_type, old, new in history_list or []:
AttributeHistory.create(attr_id=attr_id,
operate_type=operate_type,
old=json.dumps(old) if isinstance(old, (dict, list)) else old,
new=json.dumps(new) if isinstance(new, (dict, list)) else new,
ci_id=ci_id,
record_id=record_id,
flush=flush,
commit=commit)
return record_id
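A minimal sketch of the history_list payload accepted by add() above, assuming hypothetical ids and values: each entry is an (attr_id, operate_type, old, new) tuple, and dict or list values are JSON-encoded before storage.
def _example_record_attribute_history():
    # Sketch only: requires a logged-in user (current_user); record_id=None creates the OperationRecord.
    history_list = [(3, OperateType.UPDATE, "centos7", "centos8")]
    return AttributeHistoryManger.add(None, ci_id=100, history_list=history_list, type_id=1)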
class CIRelationHistoryManager(object):
@staticmethod
def add(rel_obj, operate_type=OperateType.ADD):
record = OperationRecord.create(uid=current_user.uid)
CIRelationHistory.create(relation_id=rel_obj.id,
record_id=record.id,
operate_type=operate_type,
first_ci_id=rel_obj.first_ci_id,
second_ci_id=rel_obj.second_ci_id,
relation_type_id=rel_obj.relation_type_id)
class CITypeHistoryManager(object):
@staticmethod
def get(page, page_size, username=None, type_id=None, operate_type=None):
query = CITypeHistory.get_by(only_query=True)
if type_id is not None:
query = query.filter(CITypeHistory.type_id == type_id)
if username:
user = UserCache.get(username)
if user:
query = query.filter(CITypeHistory.uid == user.uid)
else:
return abort(404, ErrFormat.user_not_found.format(username))
if operate_type is not None:
query = query.filter(CITypeHistory.operate_type == operate_type)
numfound = query.count()
query = query.order_by(CITypeHistory.id.desc())
result = query.offset((page - 1) * page_size).limit(page_size)
result = [i.to_dict() for i in result]
for res in result:
res["user"] = UserCache.get(res.get("uid"))
if res["user"]:
res['user'] = res['user'].nickname
if res.get('attr_id'):
attr = AttributeCache.get(res['attr_id'])
res['attr'] = attr and attr.to_dict()
elif res.get('trigger_id'):
trigger = CITypeTrigger.get_by_id(res['trigger_id'])
res['trigger'] = trigger and trigger.to_dict()
elif res.get('unique_constraint_id'):
unique_constraint = CITypeUniqueConstraint.get_by_id(res['unique_constraint_id'])
res['unique_constraint'] = unique_constraint and unique_constraint.to_dict()
return numfound, result
@staticmethod
def add(operate_type, type_id, attr_id=None, trigger_id=None, unique_constraint_id=None, change=None):
if type_id is None and attr_id is not None:
from api.models.cmdb import CITypeAttribute
type_ids = [i.type_id for i in CITypeAttribute.get_by(attr_id=attr_id, to_dict=False)]
else:
type_ids = [type_id]
for _type_id in type_ids:
payload = dict(operate_type=operate_type,
type_id=_type_id,
uid=current_user.uid,
attr_id=attr_id,
trigger_id=trigger_id,
unique_constraint_id=unique_constraint_id,
change=change)
CITypeHistory.create(**payload)
class CITriggerHistoryManager(object):
@staticmethod
def get(page, page_size, type_id=None, trigger_id=None, operate_type=None):
query = CITriggerHistory.get_by(only_query=True)
if type_id:
query = query.filter(CITriggerHistory.type_id == type_id)
if trigger_id:
query = query.filter(CITriggerHistory.trigger_id == trigger_id)
if operate_type:
query = query.filter(CITriggerHistory.operate_type == operate_type)
numfound = query.count()
query = query.order_by(CITriggerHistory.id.desc())
result = query.offset((page - 1) * page_size).limit(page_size)
result = [i.to_dict() for i in result]
for res in result:
if res.get('trigger_id'):
trigger = CITypeTrigger.get_by_id(res['trigger_id'])
res['trigger'] = trigger and trigger.to_dict()
return numfound, result
@staticmethod
def get_by_ci_id(ci_id):
res = db.session.query(CITriggerHistory, CITypeTrigger).join(
CITypeTrigger, CITypeTrigger.id == CITriggerHistory.trigger_id).filter(
CITriggerHistory.ci_id == ci_id).order_by(CITriggerHistory.id.desc())
result = []
id2trigger = dict()
for i in res:
hist = i.CITriggerHistory
item = dict(is_ok=hist.is_ok,
operate_type=hist.operate_type,
notify=hist.notify,
trigger_id=hist.trigger_id,
trigger_name=hist.trigger_name,
webhook=hist.webhook,
created_at=hist.created_at.strftime('%Y-%m-%d %H:%M:%S'),
record_id=hist.record_id,
hid=hist.id
)
if i.CITypeTrigger.id not in id2trigger:
id2trigger[i.CITypeTrigger.id] = i.CITypeTrigger.to_dict()
result.append(item)
return dict(items=result, id2trigger=id2trigger)
@staticmethod
def add(operate_type, record_id, ci_id, trigger_id, trigger_name, is_ok=False, notify=None, webhook=None):
CITriggerHistory.create(operate_type=operate_type,
record_id=record_id,
ci_id=ci_id,
trigger_id=trigger_id,
trigger_name=trigger_name,
is_ok=is_ok,
notify=notify,
webhook=webhook)

@@ -0,0 +1,177 @@
# -*- coding:utf-8 -*-
import functools
from flask import abort
from flask import current_app
from flask import request
from flask_login import current_user
from api.lib.cmdb.const import ResourceTypeEnum
from api.lib.cmdb.resp_format import ErrFormat
from api.lib.mixin import DBMixin
from api.lib.perm.acl.acl import ACLManager
from api.lib.perm.acl.acl import is_app_admin
from api.lib.perm.acl.acl import validate_permission
from api.models.cmdb import CIFilterPerms
class CIFilterPermsCRUD(DBMixin):
cls = CIFilterPerms
def get(self, type_id):
res = self.cls.get_by(type_id=type_id, to_dict=True)
result = {}
for i in res:
if i['attr_filter']:
i['attr_filter'] = i['attr_filter'].split(',')
if i['rid'] not in result:
result[i['rid']] = i
else:
if i['attr_filter']:
if not result[i['rid']]['attr_filter']:
result[i['rid']]['attr_filter'] = []
result[i['rid']]['attr_filter'] += i['attr_filter']
result[i['rid']]['attr_filter'] = list(set(result[i['rid']]['attr_filter']))
if i['ci_filter']:
if not result[i['rid']]['ci_filter']:
result[i['rid']]['ci_filter'] = ""
result[i['rid']]['ci_filter'] += (i['ci_filter'] or "")
return result
def get_by_ids(self, _ids, type_id=None):
if not _ids:
return {}
if type_id is not None:
res = self.cls.get_by(type_id=type_id, __func_in___key_id=_ids, to_dict=True)
else:
res = self.cls.get_by(__func_in___key_id=_ids, to_dict=True)
result = {}
for i in res:
if i['attr_filter']:
i['attr_filter'] = i['attr_filter'].split(',')
if i['type_id'] not in result:
result[i['type_id']] = i
else:
if i['attr_filter']:
if not result[i['type_id']]['attr_filter']:
result[i['type_id']]['attr_filter'] = []
result[i['type_id']]['attr_filter'] += i['attr_filter']
result[i['type_id']]['attr_filter'] = list(set(result[i['type_id']]['attr_filter']))
if i['ci_filter']:
if not result[i['type_id']]['ci_filter']:
result[i['type_id']]['ci_filter'] = ""
result[i['type_id']]['ci_filter'] += (i['ci_filter'] or "")
return result
@classmethod
def get_attr_filter(cls, type_id):
if is_app_admin('cmdb') or current_user.username in ('worker', 'cmdb_agent'):
return []
res2 = ACLManager('cmdb').get_resources(ResourceTypeEnum.CI_FILTER)
if res2:
type2filter_perms = cls().get_by_ids(list(map(int, [i['name'] for i in res2])), type_id=type_id)
return type2filter_perms.get(type_id, {}).get('attr_filter') or []
def _can_add(self, **kwargs):
ci_filter = kwargs.get('ci_filter')
attr_filter = kwargs.get('attr_filter') or ""
if 'attr_filter' in kwargs:
kwargs['attr_filter'] = kwargs['attr_filter'] or None
if attr_filter:
kwargs['attr_filter'] = ','.join(attr_filter or [])
if ci_filter and not kwargs.get('name'):
return abort(400, ErrFormat.ci_filter_name_cannot_be_empty)
if ci_filter and ci_filter.startswith('q='):
kwargs['ci_filter'] = kwargs['ci_filter'][2:]
return kwargs
def add(self, **kwargs):
kwargs = self._can_add(**kwargs) or kwargs
obj = self.cls.get_by(type_id=kwargs.get('type_id'),
rid=kwargs.get('rid'),
first=True, to_dict=False)
if obj is not None:
obj = obj.update(filter_none=False, **kwargs)
if not obj.attr_filter and not obj.ci_filter:
if current_app.config.get('USE_ACL'):
ACLManager().del_resource(str(obj.id), ResourceTypeEnum.CI_FILTER)
obj.soft_delete()
else:
if not kwargs.get('ci_filter') and not kwargs.get('attr_filter'):
return
obj = self.cls.create(**kwargs)
if current_app.config.get('USE_ACL'):
try:
ACLManager().add_resource(obj.id, ResourceTypeEnum.CI_FILTER)
except:
pass
ACLManager().grant_resource_to_role_by_rid(obj.id,
kwargs.get('rid'),
ResourceTypeEnum.CI_FILTER)
return obj
def _can_update(self, **kwargs):
pass
def _can_delete(self, **kwargs):
pass
def delete(self, **kwargs):
obj = self.cls.get_by(type_id=kwargs.get('type_id'),
rid=kwargs.get('rid'),
first=True, to_dict=False)
if obj is not None:
if current_app.config.get('USE_ACL'):
ACLManager().del_resource(str(obj.id), ResourceTypeEnum.CI_FILTER)
obj.soft_delete()
def has_perm_for_ci(arg_name, resource_type, perm, callback=None, app=None):
def decorator_has_perm(func):
@functools.wraps(func)
def wrapper_has_perm(*args, **kwargs):
if not arg_name:
return
resource = request.view_args.get(arg_name) or request.values.get(arg_name)
if callback is not None and resource:
resource = callback(resource)
if current_app.config.get("USE_ACL") and resource:
if current_user.username == "worker" or current_user.username == "cmdb_agent":
request.values['__is_admin'] = True
return func(*args, **kwargs)
if is_app_admin(app):
request.values['__is_admin'] = True
return func(*args, **kwargs)
validate_permission(resource.name, resource_type, perm, app)
return func(*args, **kwargs)
return wrapper_has_perm
return decorator_has_perm
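A minimal sketch of applying the decorator above to a Flask view, assuming a hypothetical view, "type_id" argument, and callback: the callback turns the raw request argument into an object whose name matches the ACL resource.
def _example_protected_view():
    # Sketch only: the view name and its argument are hypothetical.
    from api.lib.cmdb.cache import CITypeCache
    from api.lib.cmdb.const import PermEnum

    @has_perm_for_ci('type_id', ResourceTypeEnum.CI, PermEnum.READ,
                     callback=lambda type_id: CITypeCache.get(type_id), app='cmdb')
    def get_cis_of_type(type_id):
        ...  # the wrapped view only runs after the permission check passes
    return get_cis_of_type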

@@ -0,0 +1,340 @@
# -*- coding:utf-8 -*-
import copy
import six
import toposort
from flask import abort
from flask import current_app
from flask_login import current_user
from api.extensions import db
from api.lib.cmdb.attribute import AttributeManager
from api.lib.cmdb.cache import AttributeCache
from api.lib.cmdb.cache import CITypeAttributesCache
from api.lib.cmdb.cache import CITypeCache
from api.lib.cmdb.const import PermEnum, ResourceTypeEnum, RoleEnum
from api.lib.cmdb.perms import CIFilterPermsCRUD
from api.lib.cmdb.resp_format import ErrFormat
from api.lib.exception import AbortException
from api.lib.perm.acl.acl import ACLManager
from api.models.cmdb import CITypeAttribute
from api.models.cmdb import CITypeRelation
from api.models.cmdb import PreferenceRelationView
from api.models.cmdb import PreferenceSearchOption
from api.models.cmdb import PreferenceShowAttributes
from api.models.cmdb import PreferenceTreeView
class PreferenceManager(object):
pref_attr_cls = PreferenceShowAttributes
pref_tree_cls = PreferenceTreeView
pref_rel_cls = PreferenceRelationView
pre_so_cls = PreferenceSearchOption
@staticmethod
def get_types(instance=False, tree=False):
types = db.session.query(PreferenceShowAttributes.type_id).filter(
PreferenceShowAttributes.uid == current_user.uid).filter(
PreferenceShowAttributes.deleted.is_(False)).group_by(
PreferenceShowAttributes.type_id).all() if instance else []
tree_types = PreferenceTreeView.get_by(uid=current_user.uid, to_dict=False) if tree else []
type_ids = set([i.type_id for i in types + tree_types])
return [CITypeCache.get(type_id).to_dict() for type_id in type_ids]
@staticmethod
def get_types2(instance=False, tree=False):
"""
{
self: {instance: [], tree: [], type_id2subs_time: {type_id: subs_time}},
type_id2users: {type_id: []}
}
:param instance:
:param tree:
:return:
"""
result = dict(self=dict(instance=[], tree=[], type_id2subs_time=dict()),
type_id2users=dict())
if instance:
types = db.session.query(PreferenceShowAttributes.type_id,
PreferenceShowAttributes.uid, PreferenceShowAttributes.created_at).filter(
PreferenceShowAttributes.deleted.is_(False)).group_by(
PreferenceShowAttributes.uid, PreferenceShowAttributes.type_id)
for i in types:
if i.uid == current_user.uid:
result['self']['instance'].append(i.type_id)
if str(i.created_at) > str(result['self']['type_id2subs_time'].get(i.type_id, "")):
result['self']['type_id2subs_time'][i.type_id] = i.created_at
result['type_id2users'].setdefault(i.type_id, []).append(i.uid)
if tree:
types = PreferenceTreeView.get_by(to_dict=False)
for i in types:
if i.uid == current_user.uid:
result['self']['tree'].append(i.type_id)
if str(i.created_at) > str(result['self']['type_id2subs_time'].get(i.type_id, "")):
result['self']['type_id2subs_time'][i.type_id] = i.created_at
result['type_id2users'].setdefault(i.type_id, [])
if i.uid not in result['type_id2users'][i.type_id]:
result['type_id2users'][i.type_id].append(i.uid)
return result
@staticmethod
def get_show_attributes(type_id):
if not isinstance(type_id, six.integer_types):
_type = CITypeCache.get(type_id)
type_id = _type and _type.id
attrs = db.session.query(PreferenceShowAttributes, CITypeAttribute.order).join(
CITypeAttribute, CITypeAttribute.attr_id == PreferenceShowAttributes.attr_id).filter(
PreferenceShowAttributes.uid == current_user.uid).filter(
PreferenceShowAttributes.type_id == type_id).filter(
PreferenceShowAttributes.deleted.is_(False)).filter(CITypeAttribute.deleted.is_(False)).filter(
CITypeAttribute.type_id == type_id).all()
result = []
for i in sorted(attrs, key=lambda x: x.PreferenceShowAttributes.order):
item = i.PreferenceShowAttributes.attr.to_dict()
item.update(dict(is_fixed=i.PreferenceShowAttributes.is_fixed))
result.append(item)
is_subscribed = True
if not attrs:
attrs = db.session.query(CITypeAttribute).filter(
CITypeAttribute.type_id == type_id).filter(
CITypeAttribute.deleted.is_(False)).filter(
CITypeAttribute.default_show.is_(True)).order_by(CITypeAttribute.order)
result = [i.attr.to_dict() for i in attrs]
is_subscribed = False
for i in result:
if i["is_choice"]:
i.update(dict(choice_value=AttributeManager.get_choice_values(
i["id"], i["value_type"], i["choice_web_hook"], i.get("choice_other"))))
return is_subscribed, result
@classmethod
def create_or_update_show_attributes(cls, type_id, attr_order):
existed_all = PreferenceShowAttributes.get_by(type_id=type_id, uid=current_user.uid, to_dict=False)
for x, order in attr_order:
if isinstance(x, list):
_attr, is_fixed = x
else:
_attr, is_fixed = x, False
attr = AttributeCache.get(_attr) or abort(404, ErrFormat.attribute_not_found.format("id={}".format(_attr)))
existed = PreferenceShowAttributes.get_by(type_id=type_id,
uid=current_user.uid,
attr_id=attr.id,
first=True,
to_dict=False)
if existed is None:
PreferenceShowAttributes.create(type_id=type_id,
uid=current_user.uid,
attr_id=attr.id,
order=order,
is_fixed=is_fixed)
else:
existed.update(order=order, is_fixed=is_fixed)
attr_dict = {int(i[0]) if isinstance(i, list) else int(i): j for i, j in attr_order}
for i in existed_all:
if i.attr_id not in attr_dict:
i.soft_delete()
@staticmethod
def get_tree_view():
res = PreferenceTreeView.get_by(uid=current_user.uid, to_dict=True)
for item in res:
if item["levels"]:
ci_type = CITypeCache.get(item['type_id']).to_dict()
attr_filter = CIFilterPermsCRUD.get_attr_filter(ci_type['id'])
ci_type.pop('id', None)
ci_type.pop('created_at', None)
ci_type.pop('updated_at', None)
item.update(ci_type)
_levels = []
for i in item["levels"]:
attr = AttributeCache.get(i)
if attr and (not attr_filter or attr.name in attr_filter):
_levels.append(attr.to_dict())
item.update(dict(levels=_levels))
return res
@staticmethod
def create_or_update_tree_view(type_id, levels):
attrs = CITypeAttributesCache.get(type_id)
for idx, i in enumerate(levels):
for attr in attrs:
attr = AttributeCache.get(attr.attr_id)
if i == attr.id or i == attr.name or i == attr.alias:
levels[idx] = attr.id
existed = PreferenceTreeView.get_by(uid=current_user.uid, type_id=type_id, to_dict=False, first=True)
if existed is not None:
if not levels:
existed.soft_delete()
return existed
return existed.update(levels=levels)
elif levels:
return PreferenceTreeView.create(levels=levels, type_id=type_id, uid=current_user.uid)
@staticmethod
def get_relation_view():
_views = PreferenceRelationView.get_by(to_dict=True)
views = []
if current_app.config.get("USE_ACL"):
for i in _views:
try:
if i.get('is_public') or ACLManager().has_permission(i.get('name'),
ResourceTypeEnum.RELATION_VIEW,
PermEnum.READ):
views.append(i)
except AbortException:
pass
else:
views = _views
view2cr_ids = dict()
result = dict()
name2id = list()
for view in views:
view2cr_ids.setdefault(view['name'], []).extend(view['cr_ids'])
name2id.append([view['name'], view['id']])
id2type = dict()
for view_name in view2cr_ids:
for i in view2cr_ids[view_name]:
id2type[i['parent_id']] = None
id2type[i['child_id']] = None
topo = {i['child_id']: {i['parent_id']} for i in view2cr_ids[view_name]}
leaf = list(set(toposort.toposort_flatten(topo)) - set([j for i in topo.values() for j in i]))
leaf2show_types = {i: [t['child_id'] for t in CITypeRelation.get_by(parent_id=i)] for i in leaf}
node2show_types = copy.deepcopy(leaf2show_types)
def _find_parent(_node_id):
parents = topo.get(_node_id, {})
for parent in parents:
node2show_types.setdefault(parent, []).extend(node2show_types.get(_node_id, []))
_find_parent(parent)
if not parents:
return
for l in leaf:
_find_parent(l)
for node_id in node2show_types:
node2show_types[node_id] = [CITypeCache.get(i).to_dict() for i in set(node2show_types[node_id])]
result[view_name] = dict(topo=list(map(list, toposort.toposort(topo))),
topo_flatten=list(toposort.toposort_flatten(topo)),
leaf=leaf,
leaf2show_types=leaf2show_types,
node2show_types=node2show_types,
show_types=[CITypeCache.get(j).to_dict()
for i in leaf2show_types.values() for j in i])
for type_id in id2type:
id2type[type_id] = CITypeCache.get(type_id).to_dict()
return result, id2type, sorted(name2id, key=lambda x: x[1])
@classmethod
def create_or_update_relation_view(cls, name, cr_ids, is_public=False):
if not cr_ids:
return abort(400, ErrFormat.preference_relation_view_node_required)
existed = PreferenceRelationView.get_by(name=name, to_dict=False, first=True)
current_app.logger.debug(existed)
if existed is None:
PreferenceRelationView.create(name=name, cr_ids=cr_ids, uid=current_user.uid, is_public=is_public)
if current_app.config.get("USE_ACL"):
ACLManager().add_resource(name, ResourceTypeEnum.RELATION_VIEW)
ACLManager().grant_resource_to_role(name,
RoleEnum.CMDB_READ_ALL,
ResourceTypeEnum.RELATION_VIEW,
permissions=[PermEnum.READ])
return cls.get_relation_view()
@staticmethod
def delete_relation_view(name):
for existed in PreferenceRelationView.get_by(name=name, to_dict=False):
existed.soft_delete()
if current_app.config.get("USE_ACL"):
ACLManager().del_resource(name, ResourceTypeEnum.RELATION_VIEW)
return name
@staticmethod
def get_search_option(**kwargs):
query = PreferenceSearchOption.get_by(only_query=True)
query = query.filter(PreferenceSearchOption.uid == current_user.uid)
for k in kwargs:
if hasattr(PreferenceSearchOption, k) and kwargs[k]:
query = query.filter(getattr(PreferenceSearchOption, k) == kwargs[k])
return [i.to_dict() for i in query]
@staticmethod
def add_search_option(**kwargs):
kwargs['uid'] = current_user.uid
existed = PreferenceSearchOption.get_by(uid=current_user.uid,
name=kwargs.get('name'),
prv_id=kwargs.get('prv_id'),
ptv_id=kwargs.get('ptv_id'),
type_id=kwargs.get('type_id'),
)
if existed:
return abort(400, ErrFormat.preference_search_option_exists)
return PreferenceSearchOption.create(**kwargs)
@staticmethod
def update_search_option(_id, **kwargs):
existed = PreferenceSearchOption.get_by_id(_id) or abort(404, ErrFormat.preference_search_option_not_found)
if current_user.uid != existed.uid:
return abort(400, ErrFormat.no_permission2)
other = PreferenceSearchOption.get_by(uid=current_user.uid,
name=kwargs.get('name'),
prv_id=kwargs.get('prv_id'),
ptv_id=kwargs.get('ptv_id'),
type_id=kwargs.get('type_id'),
)
if other.id != _id:
return abort(400, ErrFormat.preference_search_option_exists)
return existed.update(**kwargs)
@staticmethod
def delete_search_option(_id):
existed = PreferenceSearchOption.get_by_id(_id) or abort(404, ErrFormat.preference_search_option_not_found)
if current_user.uid != existed.uid:
return abort(400, ErrFormat.no_permission2)
existed.soft_delete()
@staticmethod
def delete_by_type_id(type_id, uid):
for i in PreferenceShowAttributes.get_by(type_id=type_id, uid=uid, to_dict=False):
i.soft_delete()
for i in PreferenceTreeView.get_by(type_id=type_id, uid=uid, to_dict=False):
i.soft_delete()
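A minimal sketch of the attr_order payload accepted by create_or_update_show_attributes above, assuming hypothetical attribute ids: each entry pairs an attribute id, optionally wrapped with an is_fixed flag, with its display order.
def _example_subscribe_show_attributes():
    # Sketch only: requires a logged-in user (current_user); the ids are hypothetical.
    attr_order = [
        ([3, True], 0),  # ([attr_id, is_fixed], order) -> fixed column
        (5, 1),          # (attr_id, order), is_fixed defaults to False
    ]
    PreferenceManager.create_or_update_show_attributes(1, attr_order)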

@@ -0,0 +1,63 @@
# -*- coding:utf-8 -*-
QUERY_CIS_BY_VALUE_TABLE = """
SELECT attr.name AS attr_name,
attr.alias AS attr_alias,
attr.value_type,
attr.is_list,
c_cis.type_id,
{0}.ci_id,
{0}.attr_id,
{0}.value
FROM {0}
INNER JOIN c_cis ON {0}.ci_id=c_cis.id
AND {0}.`ci_id` IN ({1})
INNER JOIN c_attributes as attr ON attr.id = {0}.attr_id
"""
# {2}: value_table
QUERY_CIS_BY_IDS = """
SELECT A.ci_id,
A.type_id,
A.attr_id,
A.attr_name,
A.attr_alias,
A.value,
A.value_type,
A.is_list
FROM
({1}) AS A {0}
ORDER BY A.ci_id;
"""
FACET_QUERY1 = """
SELECT {0}.value,
count({0}.ci_id)
FROM {0}
INNER JOIN c_attributes AS attr ON attr.id={0}.attr_id
WHERE attr.name="{1}"
GROUP BY {0}.ci_id;
"""
FACET_QUERY = """
SELECT {0}.value,
count(distinct({0}.ci_id))
FROM {0}
INNER JOIN ({1}) AS F ON F.ci_id={0}.ci_id
WHERE {0}.attr_id={2:d}
GROUP BY {0}.value
"""
QUERY_CI_BY_ATTR_NAME = """
SELECT {0}.ci_id
FROM {0}
WHERE {0}.attr_id={1:d}
AND {0}.value {2}
"""
QUERY_CI_BY_TYPE = """
SELECT c_cis.id AS ci_id
FROM c_cis
WHERE c_cis.type_id in ({0})
"""

@@ -0,0 +1,44 @@
# -*- coding:utf-8 -*-
from flask import abort
from api.lib.cmdb.resp_format import ErrFormat
from api.models.cmdb import RelationType
class RelationTypeManager(object):
cls = RelationType
@staticmethod
def get_all():
return RelationType.get_by(to_dict=False)
@classmethod
def get_names(cls):
return [i.name for i in cls.get_all()]
@classmethod
def get_pairs(cls):
return [(i.id, i.name) for i in cls.get_all()]
@staticmethod
def add(name):
RelationType.get_by(name=name, first=True, to_dict=False) and abort(
400, ErrFormat.relation_type_exists.format(name))
return RelationType.create(name=name)
@staticmethod
def update(rel_id, name):
existed = RelationType.get_by_id(rel_id) or abort(
404, ErrFormat.relation_type_not_found.format("id={}".format(rel_id)))
return existed.update(name=name)
@staticmethod
def delete(rel_id):
existed = RelationType.get_by_id(rel_id) or abort(
404, ErrFormat.relation_type_not_found.format("id={}".format(rel_id)))
existed.soft_delete()

@@ -0,0 +1,97 @@
# -*- coding:utf-8 -*-
from api.lib.resp_format import CommonErrFormat
class ErrFormat(CommonErrFormat):
invalid_relation_type = "无效的关系类型: {}"
ci_type_not_found = "模型不存在!"
argument_attributes_must_be_list = "参数 attributes 类型必须是列表"
argument_file_not_found = "文件似乎并未上传"
attribute_not_found = "属性 {} 不存在!"
attribute_is_unique_id = "该属性是模型的唯一标识,不能被删除!"
attribute_is_ref_by_type = "该属性被模型 {} 引用, 不能删除!"
attribute_value_type_cannot_change = "属性的值类型不允许修改!"
attribute_list_value_cannot_change = "多值不被允许修改!"
attribute_index_cannot_change = "修改索引 非管理员不被允许!"
attribute_index_change_failed = "索引切换失败!"
invalid_choice_values = "预定义值的类型不对!"
attribute_name_duplicate = "重复的属性名 {}"
add_attribute_failed = "创建属性 {} 失败!"
update_attribute_failed = "修改属性 {} 失败!"
cannot_edit_attribute = "您没有权限修改该属性!"
cannot_delete_attribute = "目前只允许 属性创建人、管理员 删除属性!"
attribute_name_cannot_be_builtin = "属性字段名不能是内置字段: id, _id, ci_id, type, _type, ci_type"
attribute_choice_other_invalid = "预定义值: 其他模型请求参数不合法!"
ci_not_found = "CI {} 不存在"
unique_constraint = "多属性联合唯一校验不通过: {}"
unique_value_not_found = "模型的主键 {} 不存在!"
unique_key_required = "主键字段 {} 缺失"
ci_is_already_existed = "CI 已经存在!"
relation_constraint = "关系约束: {}, 校验失败 "
relation_not_found = "CI关系: {} 不存在"
ci_search_Parentheses_invalid = "搜索表达式里小括号前不支持: 或、非"
ci_type_not_found2 = "模型 {} 不存在"
ci_type_is_already_existed = "模型 {} 已经存在"
unique_key_not_define = "主键未定义或者已被删除"
only_owner_can_delete = "只有创建人才能删除它!"
ci_exists_and_cannot_delete_type = "因为CI已经存在不能删除模型"
ci_relation_view_exists_and_cannot_delete_type = "因为关系视图 {} 引用了该模型,不能删除模型"
ci_type_group_not_found = "模型分组 {} 不存在"
ci_type_group_exists = "模型分组 {} 已经存在"
ci_type_relation_not_found = "模型关系 {} 不存在"
ci_type_attribute_group_duplicate = "属性分组 {} 已存在"
ci_type_attribute_group_not_found = "属性分组 {} 不存在"
ci_type_group_attribute_not_found = "属性组<{0}> - 属性<{1}> 不存在"
unique_constraint_duplicate = "唯一约束已经存在!"
unique_constraint_invalid = "唯一约束的属性不能是 JSON 和 多值"
ci_type_trigger_duplicate = "重复的触发器"
ci_type_trigger_not_found = "触发器 {} 不存在"
record_not_found = "操作记录 {} 不存在"
cannot_delete_unique = "不能删除唯一标识"
cannot_delete_default_order_attr = "不能删除默认排序的属性"
preference_relation_view_node_required = "没有选择节点"
preference_search_option_not_found = "该搜索选项不存在!"
preference_search_option_exists = "该搜索选项命名重复!"
relation_type_exists = "关系类型 {} 已经存在"
relation_type_not_found = "关系类型 {} 不存在"
attribute_value_invalid = "无效的属性值: {}"
attribute_value_invalid2 = "{} 无效的值: {}"
not_in_choice_values = "{} 不在预定义值里"
attribute_value_unique_required = "属性 {} 的值必须是唯一的, 当前值 {} 已存在"
attribute_value_required = "属性 {} 值必须存在"
attribute_value_unknown_error = "新增或者修改属性值未知错误: {}"
custom_name_duplicate = "订制名重复"
limit_ci_type = "模型数超过限制: {}"
limit_ci = "CI数超过限制: {}"
adr_duplicate = "自动发现规则: {} 已经存在!"
adr_not_found = "自动发现规则: {} 不存在!"
adr_referenced = "该自动发现规则被模型引用, 不能删除!"
ad_duplicate = "自动发现规则的应用不能重复定义!"
ad_not_found = "您要修改的自动发现: {} 不存在!"
ad_not_unique_key = "属性字段没有包括唯一标识: {}"
adc_not_found = "自动发现的实例不存在!"
adt_not_found = "模型并未关联该自动发现!"
adt_secret_no_permission = "只有创建人才能修改Secret!"
cannot_delete_adt = "该规则已经有自动发现的实例, 不能被删除!"
adr_default_ref_once = "该默认的自动发现规则 已经被模型 {} 引用!"
adr_unique_key_required = "unique_key方法必须返回非空字符串!"
adr_plugin_attributes_list_required = "attributes方法必须返回的是list"
adr_plugin_attributes_list_no_empty = "attributes方法返回的list不能为空!"
adt_target_all_no_permission = "只有管理员才可以定义执行机器为: 所有节点!"
adt_target_expr_no_permission = "执行机器权限检查不通过: {}"
ci_filter_name_cannot_be_empty = "CI过滤授权 必须命名!"
ci_filter_perm_cannot_or_query = "CI过滤授权 暂时不支持 或 查询"
ci_filter_perm_attr_no_permission = "您没有属性 {} 的操作权限!"
ci_filter_perm_ci_no_permission = "您没有该CI的操作权限!"

@@ -0,0 +1,11 @@
# -*- coding:utf-8 -*-
__all__ = ['ci', 'ci_relation', 'SearchError']
class SearchError(Exception):
def __init__(self, v):
self.v = v
def __str__(self):
return self.v

@@ -0,0 +1,25 @@
# -*- coding:utf-8 -*-
__all__ = ['db', 'es', 'search']
from flask import current_app
from api.lib.cmdb.const import RetKey
from api.lib.cmdb.search.ci.db.search import Search as SearchFromDB
from api.lib.cmdb.search.ci.es.search import Search as SearchFromES
def search(query=None,
fl=None,
facet=None,
page=1,
ret_key=RetKey.NAME,
count=1,
sort=None,
excludes=None):
if current_app.config.get("USE_ES"):
s = SearchFromES(query, fl, facet, page, ret_key, count, sort)
else:
s = SearchFromDB(query, fl, facet, page, ret_key, count, sort, excludes=excludes)
return s
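A minimal usage sketch, assuming an application context; "host" and "status" are hypothetical CI type and attribute names. The dispatcher returns a DB- or ES-backed Search object depending on the USE_ES flag, and callers in this diff unpack search() results as a six-tuple.
def _example_search():
    # Sketch only: the query expression is illustrative.
    s = search("_type:host,status:online", page=1, count=25)
    cis, _, _, _, numfound, facet = s.search()  # results, ..., total hit count, facet buckets
    return cis, numfound, facet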

@@ -0,0 +1 @@
# -*- coding:utf-8 -*-

@@ -0,0 +1,107 @@
# -*- coding:utf-8 -*-
from __future__ import unicode_literals
QUERY_CIS_BY_VALUE_TABLE = """
SELECT attr.name AS attr_name,
attr.alias AS attr_alias,
attr.value_type,
attr.is_list,
c_cis.type_id,
{0}.ci_id,
{0}.attr_id,
{0}.value
FROM {0}
INNER JOIN c_cis ON {0}.ci_id=c_cis.id
AND {0}.`ci_id` IN ({1})
INNER JOIN c_attributes as attr ON attr.id = {0}.attr_id
"""
# {2}: value_table
QUERY_CIS_BY_IDS = """
SELECT A.ci_id,
A.type_id,
A.attr_id,
A.attr_name,
A.attr_alias,
A.value,
A.value_type,
A.is_list
FROM
({1}) AS A {0}
ORDER BY A.ci_id;
"""
FACET_QUERY1 = """
SELECT {0}.value,
count({0}.ci_id)
FROM {0}
INNER JOIN c_attributes AS attr ON attr.id={0}.attr_id
WHERE attr.name="{1}"
GROUP BY {0}.ci_id;
"""
FACET_QUERY = """
SELECT {0}.value,
count({0}.ci_id)
FROM {0}
INNER JOIN ({1}) AS F ON F.ci_id={0}.ci_id
WHERE {0}.attr_id={2:d}
GROUP BY {0}.value
"""
QUERY_CI_BY_ATTR_NAME = """
SELECT {0}.ci_id
FROM {0}
WHERE {0}.attr_id={1:d}
AND {0}.value {2}
"""
QUERY_CI_BY_ID = """
SELECT c_cis.id as ci_id
FROM c_cis
WHERE c_cis.id={}
"""
QUERY_CI_BY_TYPE = """
SELECT c_cis.id AS ci_id
FROM c_cis
WHERE c_cis.type_id in ({0})
"""
QUERY_UNION_CI_ATTRIBUTE_IS_NULL = """
SELECT *
FROM (
SELECT c_cis.id AS ci_id
FROM c_cis
WHERE c_cis.type_id IN ({0})
) {3}
LEFT JOIN (
SELECT {1}.ci_id
FROM {1}
WHERE {1}.attr_id = {2}
AND {1}.value LIKE "%"
) {4} USING (ci_id)
WHERE {4}.ci_id IS NULL
"""
QUERY_CI_BY_NO_ATTR = """
SELECT *
FROM
(SELECT c_value_index_texts.ci_id
FROM c_value_index_texts
WHERE c_value_index_texts.value LIKE "{0}"
UNION
SELECT c_value_index_integers.ci_id
FROM c_value_index_integers
WHERE c_value_index_integers.value LIKE "{0}"
UNION
SELECT c_value_index_floats.ci_id
FROM c_value_index_floats
WHERE c_value_index_floats.value LIKE "{0}"
UNION
SELECT c_value_index_datetime.ci_id
FROM c_value_index_datetime
WHERE c_value_index_datetime.value LIKE "{0}") AS {1}
GROUP BY {1}.ci_id
"""

@@ -0,0 +1,569 @@
# -*- coding:utf-8 -*-
from __future__ import unicode_literals
import copy
import time
from flask import current_app
from flask_login import current_user
from jinja2 import Template
from api.extensions import db
from api.lib.cmdb.cache import AttributeCache
from api.lib.cmdb.cache import CITypeCache
from api.lib.cmdb.ci import CIManager
from api.lib.cmdb.const import PermEnum
from api.lib.cmdb.const import ResourceTypeEnum
from api.lib.cmdb.const import RetKey
from api.lib.cmdb.const import ValueTypeEnum
from api.lib.cmdb.perms import CIFilterPermsCRUD
from api.lib.cmdb.resp_format import ErrFormat
from api.lib.cmdb.search import SearchError
from api.lib.cmdb.search.ci.db.query_sql import FACET_QUERY
from api.lib.cmdb.search.ci.db.query_sql import QUERY_CI_BY_ATTR_NAME
from api.lib.cmdb.search.ci.db.query_sql import QUERY_CI_BY_ID
from api.lib.cmdb.search.ci.db.query_sql import QUERY_CI_BY_NO_ATTR
from api.lib.cmdb.search.ci.db.query_sql import QUERY_CI_BY_TYPE
from api.lib.cmdb.search.ci.db.query_sql import QUERY_UNION_CI_ATTRIBUTE_IS_NULL
from api.lib.cmdb.utils import TableMap
from api.lib.perm.acl.acl import ACLManager
from api.lib.perm.acl.acl import is_app_admin
from api.lib.utils import handle_arg_list
class Search(object):
def __init__(self, query=None,
fl=None,
facet_field=None,
page=1,
ret_key=RetKey.NAME,
count=1,
sort=None,
ci_ids=None,
excludes=None):
self.orig_query = query
self.fl = fl or []
self.excludes = excludes or []
self.facet_field = facet_field
self.page = page
self.ret_key = ret_key
self.count = count
self.sort = sort
self.ci_ids = ci_ids or []
self.query_sql = ""
self.type_id_list = []
self.only_type_query = False
self.valid_type_names = []
self.type2filter_perms = dict()
@staticmethod
def _operator_proc(key):
operator = "&"
if key.startswith("+"):
key = key[1:].strip()
elif key.startswith("-~"):
operator = "|~"
key = key[2:].strip()
elif key.startswith("-"):
operator = "|"
key = key[1:].strip()
elif key.startswith("~"):
operator = "~"
key = key[1:].strip()
return operator, key
def _attr_name_proc(self, key):
operator, key = self._operator_proc(key)
if key in ('ci_type', 'type', '_type'):
return '_type', ValueTypeEnum.TEXT, operator, None
if key in ('id', 'ci_id', '_id'):
return '_id', ValueTypeEnum.TEXT, operator, None
attr = AttributeCache.get(key)
if attr:
return attr.name, attr.value_type, operator, attr
else:
raise SearchError(ErrFormat.attribute_not_found.format(key))
def _type_query_handler(self, v, queries):
new_v = v[1:-1].split(";") if v.startswith("(") and v.endswith(")") else [v]
for _v in new_v:
ci_type = CITypeCache.get(_v)
if len(new_v) == 1 and not self.sort and ci_type and ci_type.default_order_attr:
self.sort = ci_type.default_order_attr
if ci_type is not None:
if self.valid_type_names == "ALL" or ci_type.name in self.valid_type_names:
self.type_id_list.append(str(ci_type.id))
if ci_type.id in self.type2filter_perms:
ci_filter = self.type2filter_perms[ci_type.id].get('ci_filter')
if ci_filter:
sub = []
ci_filter = Template(ci_filter).render(user=current_user)
for i in ci_filter.split(','):
if i.startswith("~") and not sub:
queries.append(i)
else:
sub.append(i)
if sub:
queries.append(dict(operator="&", queries=sub))
if self.type2filter_perms[ci_type.id].get('attr_filter'):
if not self.fl:
self.fl = set(self.type2filter_perms[ci_type.id]['attr_filter'])
else:
self.fl = set(self.fl) & set(self.type2filter_perms[ci_type.id]['attr_filter'])
else:
raise SearchError(ErrFormat.no_permission.format(ci_type.alias, PermEnum.READ))
else:
raise SearchError(ErrFormat.ci_type_not_found2.format(_v))
if self.type_id_list:
type_ids = ",".join(self.type_id_list)
_query_sql = QUERY_CI_BY_TYPE.format(type_ids)
if self.only_type_query:
return _query_sql
else:
return ""
return ""
@staticmethod
def _id_query_handler(v):
return QUERY_CI_BY_ID.format(v)
@staticmethod
def _in_query_handler(attr, v, is_not):
new_v = v[1:-1].split(";")
if attr.value_type == ValueTypeEnum.DATE:
new_v = ["{} 00:00:00".format(i) for i in new_v if len(i) == 10]
table_name = TableMap(attr=attr).table_name
in_query = " OR {0}.value ".format(table_name).join(['{0} "{1}"'.format(
"NOT LIKE" if is_not else "LIKE",
_v.replace("*", "%")) for _v in new_v])
_query_sql = QUERY_CI_BY_ATTR_NAME.format(table_name, attr.id, in_query)
return _query_sql
@staticmethod
def _range_query_handler(attr, v, is_not):
start, end = [x.strip() for x in v[1:-1].split("_TO_")]
if attr.value_type == ValueTypeEnum.DATE:
start = "{} 00:00:00".format(start) if len(start) == 10 else start
end = "{} 00:00:00".format(end) if len(end) == 10 else end
table_name = TableMap(attr=attr).table_name
range_query = "{0} '{1}' AND '{2}'".format(
"NOT BETWEEN" if is_not else "BETWEEN",
start.replace("*", "%"), end.replace("*", "%"))
_query_sql = QUERY_CI_BY_ATTR_NAME.format(table_name, attr.id, range_query)
return _query_sql
@staticmethod
def _comparison_query_handler(attr, v):
table_name = TableMap(attr=attr).table_name
if v.startswith(">=") or v.startswith("<="):
if attr.value_type == ValueTypeEnum.DATE and len(v[2:]) == 10:
v = "{} 00:00:00".format(v)
comparison_query = "{0} '{1}'".format(v[:2], v[2:].replace("*", "%"))
else:
if attr.value_type == ValueTypeEnum.DATE and len(v[1:]) == 10:
v = "{} 00:00:00".format(v)
comparison_query = "{0} '{1}'".format(v[0], v[1:].replace("*", "%"))
_query_sql = QUERY_CI_BY_ATTR_NAME.format(table_name, attr.id, comparison_query)
return _query_sql
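The value forms these handlers accept, summarized as illustrative query fragments ("os" and "cpu_count" are hypothetical attributes):
# os:(centos7;centos8)  -> _in_query_handler (multi-value LIKE / NOT LIKE)
# cpu_count:[2_TO_8]    -> _range_query_handler (BETWEEN / NOT BETWEEN)
# cpu_count:>=4         -> _comparison_query_handler
# -os:centos*           -> a "-" prefix ORs the clause and "~" negates it (see _operator_proc)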
@staticmethod
def __sort_by(field):
field = field or ""
sort_type = "ASC"
if field.startswith("+"):
field = field[1:]
elif field.startswith("-"):
field = field[1:]
sort_type = "DESC"
return field, sort_type
def __sort_by_id(self, sort_type, query_sql):
ret_sql = "SELECT SQL_CALC_FOUND_ROWS DISTINCT B.ci_id FROM ({0}) AS B {1}"
if self.only_type_query:
return ret_sql.format(query_sql, "ORDER BY B.ci_id {1} LIMIT {0:d}, {2};".format(
(self.page - 1) * self.count, sort_type, self.count))
elif self.type_id_list:
self.query_sql = "SELECT B.ci_id FROM ({0}) AS B {1}".format(
query_sql,
"INNER JOIN c_cis on c_cis.id=B.ci_id WHERE c_cis.type_id IN ({0}) ".format(
",".join(self.type_id_list)))
return ret_sql.format(
query_sql,
"INNER JOIN c_cis on c_cis.id=B.ci_id WHERE c_cis.type_id IN ({3}) "
"ORDER BY B.ci_id {1} LIMIT {0:d}, {2};".format(
(self.page - 1) * self.count, sort_type, self.count, ",".join(self.type_id_list)))
else:
self.query_sql = "SELECT B.ci_id FROM ({0}) AS B {1}".format(
query_sql,
"INNER JOIN c_cis on c_cis.id=B.ci_id ")
return ret_sql.format(
query_sql,
"INNER JOIN c_cis on c_cis.id=B.ci_id "
"ORDER BY B.ci_id {1} LIMIT {0:d}, {2};".format((self.page - 1) * self.count, sort_type, self.count))
def __sort_by_type(self, sort_type, query_sql):
ret_sql = "SELECT SQL_CALC_FOUND_ROWS DISTINCT B.ci_id FROM ({0}) AS B {1}"
if self.type_id_list:
self.query_sql = "SELECT B.ci_id FROM ({0}) AS B {1}".format(
query_sql,
"INNER JOIN c_cis on c_cis.id=B.ci_id WHERE c_cis.type_id IN ({0}) ".format(
",".join(self.type_id_list)))
return ret_sql.format(
query_sql,
"INNER JOIN c_cis on c_cis.id=B.ci_id WHERE c_cis.type_id IN ({3}) "
"ORDER BY c_cis.type_id {1} LIMIT {0:d}, {2};".format(
(self.page - 1) * self.count, sort_type, self.count, ",".join(self.type_id_list)))
else:
self.query_sql = "SELECT B.ci_id FROM ({0}) AS B {1}".format(
query_sql,
"INNER JOIN c_cis on c_cis.id=B.ci_id ")
return ret_sql.format(
query_sql,
"INNER JOIN c_cis on c_cis.id=B.ci_id "
"ORDER BY c_cis.type_id {1} LIMIT {0:d}, {2};".format(
(self.page - 1) * self.count, sort_type, self.count))
def __sort_by_field(self, field, sort_type, query_sql):
attr = AttributeCache.get(field)
attr_id = attr.id
table_name = TableMap(attr=attr).table_name
_v_query_sql = """SELECT {0}.ci_id, {1}.value
FROM ({2}) AS {0} INNER JOIN {1} ON {1}.ci_id = {0}.ci_id
WHERE {1}.attr_id = {3}""".format("ALIAS", table_name, query_sql, attr_id)
new_table = _v_query_sql
if self.only_type_query or not self.type_id_list:
return ("SELECT SQL_CALC_FOUND_ROWS DISTINCT C.ci_id FROM ({0}) AS C ORDER BY C.value {2} "
"LIMIT {1:d}, {3};".format(new_table, (self.page - 1) * self.count, sort_type, self.count))
elif self.type_id_list:
self.query_sql = """SELECT C.ci_id
FROM ({0}) AS C
INNER JOIN c_cis on c_cis.id=C.ci_id
WHERE c_cis.type_id IN ({1})""".format(new_table, ",".join(self.type_id_list))
return """SELECT SQL_CALC_FOUND_ROWS DISTINCT C.ci_id
FROM ({0}) AS C
INNER JOIN c_cis on c_cis.id=C.ci_id
WHERE c_cis.type_id IN ({4})
ORDER BY C.value {2}
LIMIT {1:d}, {3};""".format(new_table,
(self.page - 1) * self.count,
sort_type, self.count,
",".join(self.type_id_list))
def _sort_query_handler(self, field, query_sql):
field, sort_type = self.__sort_by(field)
if field in ("_id", "ci_id") or not field:
return self.__sort_by_id(sort_type, query_sql)
elif field in ("_type", "ci_type"):
return self.__sort_by_type(sort_type, query_sql)
else:
return self.__sort_by_field(field, sort_type, query_sql)
@staticmethod
def _wrap_sql(operator, alias, _query_sql, query_sql):
if operator == "&":
query_sql = """SELECT * FROM ({0}) as {1}
INNER JOIN ({2}) as {3} USING(ci_id)""".format(query_sql, alias, _query_sql, alias + "A")
elif operator == "|" or operator == "|~":
query_sql = "SELECT * FROM ({0}) as {1} UNION ALL ({2})".format(query_sql, alias, _query_sql)
elif operator == "~":
query_sql = """SELECT * FROM ({0}) as {1} LEFT JOIN ({2}) as {3} USING(ci_id)
WHERE {3}.ci_id is NULL""".format(query_sql, alias, _query_sql, alias + "A")
return query_sql
def _execute_sql(self, query_sql):
v_query_sql = self._sort_query_handler(self.sort, query_sql)
start = time.time()
execute = db.session.execute
# current_app.logger.debug(v_query_sql)
res = execute(v_query_sql).fetchall()
end_time = time.time()
current_app.logger.debug("query ci ids time is: {0}".format(end_time - start))
numfound = execute("SELECT FOUND_ROWS();").fetchall()[0][0]
current_app.logger.debug("statistics ci ids time is: {0}".format(time.time() - end_time))
return numfound, res
def __get_types_has_read(self):
"""
:return: _type:(type1;type2)
"""
acl = ACLManager('cmdb')
res = acl.get_resources(ResourceTypeEnum.CI)
self.valid_type_names = {i['name'] for i in res if PermEnum.READ in i['permissions']}
res2 = acl.get_resources(ResourceTypeEnum.CI_FILTER)
if res2:
self.type2filter_perms = CIFilterPermsCRUD().get_by_ids(list(map(int, [i['name'] for i in res2])))
return "_type:({})".format(";".join(self.valid_type_names))
def __confirm_type_first(self, queries):
has_type = False
result = []
sub = {}
id_query = None
for q in queries:
if q.startswith("_type"):
has_type = True
result.insert(0, q)
if len(queries) == 1 or queries[1].startswith("-") or queries[1].startswith("~"):
self.only_type_query = True
elif q.startswith("_id") and len(q.split(':')) == 2:
id_query = int(q.split(":")[1]) if q.split(":")[1].isdigit() else None
result.append(q)
elif q.startswith("(") or q[1:].startswith("(") or q[2:].startswith("("):
if not q.startswith("("):
raise SearchError(ErrFormat.ci_search_Parentheses_invalid)
operator, q = self._operator_proc(q)
if q.endswith(")"):
result.append(dict(operator=operator, queries=[q[1:-1]]))
sub = dict(operator=operator, queries=[q[1:]])
elif q.endswith(")") and sub:
sub['queries'].append(q[:-1])
result.append(copy.deepcopy(sub))
sub = {}
elif sub:
sub['queries'].append(q)
else:
result.append(q)
_is_app_admin = is_app_admin('cmdb') or current_user.username == "worker"
if result and not has_type and not _is_app_admin:
type_q = self.__get_types_has_read()
if id_query:
ci = CIManager.get_by_id(id_query)
if not ci:
raise SearchError(ErrFormat.ci_not_found.format(id_query))
result.insert(0, "_type:{}".format(ci.type_id))
else:
result.insert(0, type_q)
elif _is_app_admin:
self.valid_type_names = "ALL"
else:
self.__get_types_has_read()
current_app.logger.warning(result)
return result
def __query_by_attr(self, q, queries, alias):
k = q.split(":")[0].strip()
v = "\:".join(q.split(":")[1:]).strip()
v = v.replace("'", "\\'")
v = v.replace('"', '\\"')
field, field_type, operator, attr = self._attr_name_proc(k)
if field == "_type":
_query_sql = self._type_query_handler(v, queries)
elif field == "_id":
_query_sql = self._id_query_handler(v)
elif field:
if attr is None:
raise SearchError(ErrFormat.attribute_not_found.format(field))
is_not = True if operator == "|~" else False
if field_type == ValueTypeEnum.DATE and len(v) == 10:
v = "{} 00:00:00".format(v)
# in query
if v.startswith("(") and v.endswith(")"):
_query_sql = self._in_query_handler(attr, v, is_not)
# range query
elif v.startswith("[") and v.endswith("]") and "_TO_" in v:
_query_sql = self._range_query_handler(attr, v, is_not)
# comparison query
elif v.startswith(">=") or v.startswith("<=") or v.startswith(">") or v.startswith("<"):
_query_sql = self._comparison_query_handler(attr, v)
else:
table_name = TableMap(attr=attr).table_name
if is_not and v == "*" and self.type_id_list: # special handle
_query_sql = QUERY_UNION_CI_ATTRIBUTE_IS_NULL.format(
",".join(self.type_id_list),
table_name,
attr.id,
alias,
alias + 'A'
)
alias += "AA"
else:
_query_sql = QUERY_CI_BY_ATTR_NAME.format(
table_name,
attr.id,
'{0} "{1}"'.format("NOT LIKE" if is_not else "LIKE", v.replace("*", "%")))
else:
raise SearchError(ErrFormat.argument_invalid.format("q"))
return alias, _query_sql, operator
def __query_build_by_field(self, queries, is_first=True, only_type_query_special=True, alias='A', operator='&'):
query_sql = ""
for q in queries:
_query_sql = ""
if isinstance(q, dict):
alias, _query_sql, operator = self.__query_build_by_field(q['queries'], True, True, alias)
current_app.logger.info(_query_sql)
current_app.logger.info((operator, is_first, alias))
operator = q['operator']
elif ":" in q and not q.startswith("*"):
alias, _query_sql, operator = self.__query_by_attr(q, queries, alias)
elif q == "*":
continue
elif q:
q = q.replace("'", "\\'")
q = q.replace('"', '\\"')
q = q.replace("*", "%").replace('\\n', '%')
_query_sql = QUERY_CI_BY_NO_ATTR.format(q, alias)
if is_first and _query_sql and not self.only_type_query:
query_sql = "SELECT * FROM ({0}) AS {1}".format(_query_sql, alias)
is_first = False
alias += "A"
elif self.only_type_query and only_type_query_special:
is_first = False
only_type_query_special = False
query_sql = _query_sql
elif _query_sql:
query_sql = self._wrap_sql(operator, alias, _query_sql, query_sql)
alias += "AA"
return alias, query_sql, operator
def _filter_ids(self, query_sql):
if self.ci_ids:
return "SELECT * FROM ({0}) AS IN_QUERY WHERE IN_QUERY.ci_id IN ({1})".format(
query_sql, ",".join(list(map(str, self.ci_ids))))
return query_sql
@staticmethod
def _extra_handle_query_expr(args): # \, or ,
result = []
if args:
result.append(args[0])
for arg in args[1:]:
if result[-1].endswith('\\'):
result[-1] = ",".join([result[-1].rstrip('\\'), arg])
# elif ":" not in arg:
# result[-1] = ",".join([result[-1], arg])
else:
result.append(arg)
return result
def _query_build_raw(self):
queries = handle_arg_list(self.orig_query)
queries = self._extra_handle_query_expr(queries)
queries = self.__confirm_type_first(queries)
current_app.logger.debug(queries)
_, query_sql, _ = self.__query_build_by_field(queries)
s = time.time()
if query_sql:
query_sql = self._filter_ids(query_sql)
self.query_sql = query_sql
# current_app.logger.debug(query_sql)
numfound, res = self._execute_sql(query_sql)
current_app.logger.debug("query ci ids is: {0}".format(time.time() - s))
return numfound, [_res[0] for _res in res]
return 0, []
def _facet_build(self):
facet = {}
for f in self.facet_field:
k, field_type, _, attr = self._attr_name_proc(f)
if k:
table_name = TableMap(attr=attr).table_name
query_sql = FACET_QUERY.format(table_name, self.query_sql, attr.id)
# current_app.logger.warning(query_sql)
result = db.session.execute(query_sql).fetchall()
facet[k] = result
facet_result = dict()
for k, v in facet.items():
if not k.startswith('_'):
a = getattr(AttributeCache.get(k), self.ret_key)
facet_result[a] = [(f[0], f[1], a) for f in v]
return facet_result
def _fl_build(self):
_fl = list()
for f in self.fl:
k, _, _, _ = self._attr_name_proc(f)
if k:
_fl.append(k)
return _fl
def search(self):
numfound, ci_ids = self._query_build_raw()
ci_ids = list(map(str, ci_ids))
_fl = self._fl_build()
if self.facet_field and numfound:
facet = self._facet_build()
else:
facet = dict()
response, counter = [], {}
if ci_ids:
response = CIManager.get_cis_by_ids(ci_ids, ret_key=self.ret_key, fields=_fl, excludes=self.excludes)
for res in response:
ci_type = res.get("ci_type")
if ci_type not in counter.keys():
counter[ci_type] = 0
counter[ci_type] += 1
total = len(response)
return response, counter, total, self.page, numfound, facet
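A minimal usage sketch for the DB-backed Search above, assuming a Flask application context and a seeded CMDB; the CI type "server" and attribute "hostname" are illustrative, not taken from this diff:

from api.lib.cmdb.search.ci.db.search import Search as SearchFromDB

# expressions are comma-separated; "_type" restricts CI types, "*" is a wildcard,
# a leading "-" means OR and "~" means NOT (see _operator_proc)
query = "_type:(server),hostname:web*"

response, counter, total, page, numfound, facet = SearchFromDB(
    query,
    fl=["hostname"],           # attributes to return
    facet_field=["hostname"],  # per-value facet counts
    page=1,
    count=25,
    sort="hostname",
).search()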


@ -0,0 +1 @@
# -*- coding:utf-8 -*-


@ -0,0 +1,333 @@
# -*- coding:utf-8 -*-
from __future__ import unicode_literals
import six
from flask import current_app
from api.extensions import es
from api.lib.cmdb.cache import AttributeCache
from api.lib.cmdb.const import RetKey
from api.lib.cmdb.const import ValueTypeEnum
from api.lib.cmdb.resp_format import ErrFormat
from api.lib.cmdb.search import SearchError
from api.lib.utils import handle_arg_list
class Search(object):
def __init__(self, query=None,
fl=None,
facet_field=None,
page=1,
ret_key=RetKey.NAME,
count=1,
sort=None,
ci_ids=None,
excludes=None):
self.orig_query = query
self.fl = fl or []
self.excludes = excludes or []
self.facet_field = facet_field
self.page = page
self.ret_key = ret_key
self.count = count or current_app.config.get("DEFAULT_PAGE_COUNT")
self.sort = sort or "ci_id"
self.ci_ids = ci_ids or []
self.query = dict(query=dict(bool=dict(should=[], must=[], must_not=[])))
@staticmethod
def _operator_proc(key):
operator = "&"
if key.startswith("+"):
key = key[1:].strip()
elif key.startswith("-~"):
operator = "|~"
key = key[2:].strip()
elif key.startswith("-"):
operator = "|"
key = key[1:].strip()
elif key.startswith("~"):
operator = "~"
key = key[1:].strip()
return operator, key
def _operator2query(self, operator):
if operator == "&":
return self.query['query']['bool']['must']
elif operator == "|":
return self.query['query']['bool']['should']
elif operator == "|~":
return self.query['query']['bool']['should']
else:
return self.query['query']['bool']['must_not']
def _attr_name_proc(self, key):
operator, key = self._operator_proc(key)
if key in ('ci_type', 'type', '_type'):
return 'ci_type', ValueTypeEnum.TEXT, operator
if key in ('id', 'ci_id', '_id'):
return 'ci_id', ValueTypeEnum.TEXT, operator
attr = AttributeCache.get(key)
if attr:
return attr.name, attr.value_type, operator
else:
raise SearchError(ErrFormat.attribute_not_found.format(key))
def _in_query_handle(self, attr, v, is_not):
terms = v[1:-1].split(";")
operator = "|"
if attr in ('_type', 'ci_type', 'type_id') and terms and terms[0].isdigit():
attr = "type_id"
terms = map(int, terms)
current_app.logger.warning(terms)
for term in terms:
if is_not:
self._operator2query(operator).append({
"bool": {
"must_not": [
{
"term": {
attr: term
}
}
]
}
})
else:
self._operator2query(operator).append({
"term": {
attr: term
}
})
def _filter_ids(self):
if self.ci_ids:
self.query['query']['bool'].update(dict(filter=dict(terms=dict(ci_id=self.ci_ids))))
@staticmethod
def _digit(s):
if s.isdigit():
return int(float(s))
return s
def _range_query_handle(self, attr, v, operator, is_not):
left, right = v.split("_TO_")
left, right = left.strip()[1:], right.strip()[:-1]
if is_not:
self._operator2query(operator).append({
"bool": {
"must_not": [
{
"range": {
attr: {
"lte": self._digit(right),
"gte": self._digit(left),
"boost": 2.0
}
}
}
]
}
})
else:
self._operator2query(operator).append({
"range": {
attr: {
"lte": self._digit(right),
"gte": self._digit(left),
"boost": 2.0
}
}
})
def _comparison_query_handle(self, attr, v, operator):
if v.startswith(">="):
_query = dict(gte=self._digit(v[2:]), boost=2.0)
elif v.startswith("<="):
_query = dict(lte=self._digit(v[2:]), boost=2.0)
elif v.startswith(">"):
_query = dict(gt=self._digit(v[1:]), boost=2.0)
elif v.startswith("<"):
_query = dict(lt=self._digit(v[1:]), boost=2.0)
else:
return
self._operator2query(operator).append({
"range": {
attr: _query
}
})
def _match_query_handle(self, attr, v, operator, is_not):
if "*" in v:
if is_not:
self._operator2query(operator).append({
"bool": {
"must_not": [
{
"wildcard": {
attr: v.lower() if isinstance(v, six.string_types) else v
}
}
]
}
})
else:
self._operator2query(operator).append({
"wildcard": {
attr: v.lower() if isinstance(v, six.string_types) else v
}
})
else:
if attr == "ci_type" and v.isdigit():
attr = "type_id"
if is_not:
self._operator2query(operator).append({
"bool": {
"must_not": [
{
"term": {
attr: v.lower() if isinstance(v, six.string_types) else v
}
}
]
}
})
else:
self._operator2query(operator).append({
"term": {
attr: v.lower() if isinstance(v, six.string_types) else v
}
})
def __query_build_by_field(self, queries):
for q in queries:
if ":" in q:
k = q.split(":")[0].strip()
v = ":".join(q.split(":")[1:]).strip()
field_name, field_type, operator = self._attr_name_proc(k)
if field_name:
is_not = True if operator == "|~" else False
# in query
if v.startswith("(") and v.endswith(")"):
self._in_query_handle(field_name, v, is_not)
# range query
elif v.startswith("[") and v.endswith("]") and "_TO_" in v:
self._range_query_handle(field_name, v, operator, is_not)
# comparison query
elif v.startswith(">=") or v.startswith("<=") or v.startswith(">") or v.startswith("<"):
self._comparison_query_handle(field_name, v, operator)
else:
self._match_query_handle(field_name, v, operator, is_not)
else:
raise SearchError(ErrFormat.argument_invalid.format("q"))
elif q:
raise SearchError(ErrFormat.argument_invalid.format("q"))
def _query_build_raw(self):
queries = handle_arg_list(self.orig_query)
current_app.logger.debug(queries)
self.__query_build_by_field(queries)
self._paginate_build()
filter_path = self._fl_build()
self._sort_build()
self._facet_build()
self._filter_ids()
return es.read(self.query, filter_path=filter_path)
def _facet_build(self):
aggregations = dict(aggs={})
for field in self.facet_field:
attr = AttributeCache.get(field)
if not attr:
raise SearchError(ErrFormat.attribute_not_found.format(field))
aggregations['aggs'].update({
field: {
"terms": {
"field": "{0}.keyword".format(field)
if attr.value_type not in (ValueTypeEnum.INT, ValueTypeEnum.FLOAT) else field
}
}
})
if aggregations['aggs']:
self.query.update(aggregations)
def _sort_build(self):
fields = list(filter(lambda x: x != "", (self.sort or "").split(",")))
sorts = []
for field in fields:
sort_type = "asc"
if field.startswith("+"):
field = field[1:]
elif field.startswith("-"):
field = field[1:]
sort_type = "desc"
if field == "ci_id":
sorts.append({field: {"order": sort_type}})
continue
attr = AttributeCache.get(field)
if not attr:
raise SearchError(ErrFormat.attribute_not_found.format(field))
sort_by = ("{0}.keyword".format(field)
if attr.value_type not in (ValueTypeEnum.INT, ValueTypeEnum.FLOAT) else field)
sorts.append({sort_by: {"order": sort_type}})
self.query.update(dict(sort=sorts))
def _paginate_build(self):
self.query.update({"from": (self.page - 1) * self.count,
"size": self.count})
def _fl_build(self):
return ['hits.hits._source.{0}'.format(i) for i in self.fl]
def search(self):
try:
numfound, cis, facet = self._query_build_raw()
except Exception as e:
current_app.logger.error(str(e))
raise SearchError(ErrFormat.unknown_search_error)
total = len(cis)
counter = dict()
for ci in cis:
ci_type = ci.get("ci_type")
if ci_type not in counter.keys():
counter[ci_type] = 0
counter[ci_type] += 1
facet_ = dict()
for k in facet:
facet_[k] = [[i['key'], i['doc_count'], k] for i in facet[k]["buckets"]]
return cis, counter, total, self.page, numfound, facet_
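A matching sketch for the Elasticsearch-backed Search, assuming USE_ES is enabled and the es extension is configured; "hostname" is again illustrative:

from api.lib.cmdb.search.ci.es.search import Search as SearchFromES

cis, counter, total, page, numfound, facet = SearchFromES(
    "hostname:web*",           # "*" turns the term into a wildcard query
    fl=["hostname"],
    facet_field=["hostname"],  # built as terms aggregations
    page=1,
    count=25,
    sort="-hostname",          # "-" prefix sorts descending (see _sort_build)
).search()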


@ -0,0 +1 @@
# -*- coding:utf-8 -*-


@ -0,0 +1,137 @@
# -*- coding:utf-8 -*-
import json
from collections import Counter
from flask import abort
from flask import current_app
from api.extensions import rd
from api.lib.cmdb.ci import CIRelationManager
from api.lib.cmdb.ci_type import CITypeRelationManager
from api.lib.cmdb.const import REDIS_PREFIX_CI_RELATION
from api.lib.cmdb.resp_format import ErrFormat
from api.lib.cmdb.search.ci.db.search import Search as SearchFromDB
from api.lib.cmdb.search.ci.es.search import Search as SearchFromES
from api.models.cmdb import CI
class Search(object):
def __init__(self, root_id,
level=None,
query=None,
fl=None,
facet_field=None,
page=1,
count=None,
sort=None,
reverse=False):
self.orig_query = query
self.fl = fl
self.facet_field = facet_field
self.page = page
self.count = count or current_app.config.get("DEFAULT_PAGE_COUNT")
self.sort = sort or ("ci_id" if current_app.config.get("USE_ES") else None)
self.root_id = root_id
self.level = level
self.reverse = reverse
def _get_ids(self):
merge_ids = []
ids = [self.root_id] if not isinstance(self.root_id, list) else self.root_id
for level in range(1, sorted(self.level)[-1] + 1):
_tmp = list(map(lambda x: list(json.loads(x).keys()),
filter(lambda x: x is not None, rd.get(ids, REDIS_PREFIX_CI_RELATION) or [])))
ids = [j for i in _tmp for j in i]
if level in self.level:
merge_ids.extend(ids)
return merge_ids
def _get_reverse_ids(self):
merge_ids = []
ids = [self.root_id] if not isinstance(self.root_id, list) else self.root_id
for level in range(1, sorted(self.level)[-1] + 1):
ids = CIRelationManager.get_ancestor_ids(ids, 1)
if level in self.level:
merge_ids.extend(ids)
return merge_ids
def search(self):
ids = [self.root_id] if not isinstance(self.root_id, list) else self.root_id
cis = [CI.get_by_id(_id) or abort(404, ErrFormat.ci_not_found.format("id={}".format(_id))) for _id in ids]
merge_ids = self._get_ids() if not self.reverse else self._get_reverse_ids()
if not self.orig_query or ("_type:" not in self.orig_query
and "type_id:" not in self.orig_query
and "ci_type:" not in self.orig_query):
type_ids = []
for level in self.level:
for ci in cis:
if not self.reverse:
type_ids.extend(CITypeRelationManager.get_child_type_ids(ci.type_id, level))
else:
type_ids.extend(CITypeRelationManager.get_parent_type_ids(ci.type_id, level))
type_ids = list(set(type_ids))
if self.orig_query:
self.orig_query = "_type:({0}),{1}".format(";".join(list(map(str, type_ids))), self.orig_query)
else:
self.orig_query = "_type:({0})".format(";".join(list(map(str, type_ids))))
if not merge_ids:
# cis, counter, total, self.page, numfound, facet_
return [], {}, 0, self.page, 0, {}
if current_app.config.get("USE_ES"):
return SearchFromES(self.orig_query,
fl=self.fl,
facet_field=self.facet_field,
page=self.page,
count=self.count,
sort=self.sort,
ci_ids=merge_ids).search()
else:
return SearchFromDB(self.orig_query,
fl=self.fl,
facet_field=self.facet_field,
page=self.page,
count=self.count,
sort=self.sort,
ci_ids=merge_ids).search()
def statistics(self, type_ids):
_tmp = []
ids = [self.root_id] if not isinstance(self.root_id, list) else self.root_id
for l in range(0, int(self.level)):
if not l:
_tmp = list(map(lambda x: list(json.loads(x).items()),
[i or '{}' for i in rd.get(ids, REDIS_PREFIX_CI_RELATION) or []]))
else:
for idx, item in enumerate(_tmp):
if item:
if type_ids and l == self.level - 1:
__tmp = list(
map(lambda x: [(_id, type_id) for _id, type_id in json.loads(x).items()
if type_id in type_ids],
filter(lambda x: x is not None,
rd.get([i[0] for i in item], REDIS_PREFIX_CI_RELATION) or [])))
else:
__tmp = list(map(lambda x: list(json.loads(x).items()),
filter(lambda x: x is not None,
rd.get([i[0] for i in item], REDIS_PREFIX_CI_RELATION) or [])))
_tmp[idx] = [j for i in __tmp for j in i]
else:
_tmp[idx] = []
result = {str(_id): len(_tmp[idx]) for idx, _id in enumerate(ids)}
result.update(
detail={str(_id): dict(Counter([i[1] for i in _tmp[idx]]).items()) for idx, _id in enumerate(ids)})
return result
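A sketch of the relation-level Search, assuming an application context and relation data cached in Redis; the import path and the root id are assumptions:

from api.lib.cmdb.search.ci_relation.search import Search as RelationSearch  # path assumed

# CIs one and two levels below root CI 1 that also match the query
cis, counter, total, page, numfound, facet = RelationSearch(
    root_id=1, level=[1, 2], query="hostname:web*", page=1, count=25).search()

# per-type descendant counts; statistics() expects an integer level
stats = RelationSearch(root_id=[1], level=2).statistics(type_ids=None)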


@ -0,0 +1,132 @@
# -*- coding:utf-8 -*-
from __future__ import unicode_literals
import datetime
import json
import re
import six
import api.models.cmdb as model
from api.lib.cmdb.cache import AttributeCache
from api.lib.cmdb.const import ValueTypeEnum
TIME_RE = re.compile(r"^(20|21|22|23|[0-1]\d):[0-5]\d:[0-5]\d$")
def string2int(x):
return int(float(x))
def str2datetime(x):
try:
return datetime.datetime.strptime(x, "%Y-%m-%d")
except ValueError:
pass
return datetime.datetime.strptime(x, "%Y-%m-%d %H:%M:%S")
class ValueTypeMap(object):
deserialize = {
ValueTypeEnum.INT: string2int,
ValueTypeEnum.FLOAT: float,
ValueTypeEnum.TEXT: lambda x: x,
ValueTypeEnum.TIME: lambda x: TIME_RE.findall(x)[0],
ValueTypeEnum.DATETIME: str2datetime,
ValueTypeEnum.DATE: str2datetime,
ValueTypeEnum.JSON: lambda x: json.loads(x) if isinstance(x, six.string_types) and x else x,
}
serialize = {
ValueTypeEnum.INT: int,
ValueTypeEnum.FLOAT: float,
ValueTypeEnum.TEXT: lambda x: x if isinstance(x, six.string_types) else str(x),
ValueTypeEnum.TIME: lambda x: x if isinstance(x, six.string_types) else str(x),
ValueTypeEnum.DATE: lambda x: x.strftime("%Y-%m-%d"),
ValueTypeEnum.DATETIME: lambda x: x.strftime("%Y-%m-%d %H:%M:%S"),
ValueTypeEnum.JSON: lambda x: json.loads(x) if isinstance(x, six.string_types) and x else x,
}
serialize2 = {
ValueTypeEnum.INT: int,
ValueTypeEnum.FLOAT: float,
ValueTypeEnum.TEXT: lambda x: x.decode() if not isinstance(x, six.string_types) else x,
ValueTypeEnum.TIME: lambda x: x.decode() if not isinstance(x, six.string_types) else x,
ValueTypeEnum.DATE: lambda x: (x.decode() if not isinstance(x, six.string_types) else x).split()[0],
ValueTypeEnum.DATETIME: lambda x: x.decode() if not isinstance(x, six.string_types) else x,
ValueTypeEnum.JSON: lambda x: json.loads(x) if isinstance(x, six.string_types) and x else x,
}
choice = {
ValueTypeEnum.INT: model.IntegerChoice,
ValueTypeEnum.FLOAT: model.FloatChoice,
ValueTypeEnum.TEXT: model.TextChoice,
ValueTypeEnum.TIME: model.TextChoice,
}
table = {
ValueTypeEnum.TEXT: model.CIValueText,
ValueTypeEnum.JSON: model.CIValueJson,
'index_{0}'.format(ValueTypeEnum.INT): model.CIIndexValueInteger,
'index_{0}'.format(ValueTypeEnum.TEXT): model.CIIndexValueText,
'index_{0}'.format(ValueTypeEnum.DATETIME): model.CIIndexValueDateTime,
'index_{0}'.format(ValueTypeEnum.DATE): model.CIIndexValueDateTime,
'index_{0}'.format(ValueTypeEnum.TIME): model.CIIndexValueText,
'index_{0}'.format(ValueTypeEnum.FLOAT): model.CIIndexValueFloat,
'index_{0}'.format(ValueTypeEnum.JSON): model.CIValueJson,
}
table_name = {
ValueTypeEnum.TEXT: 'c_value_texts',
ValueTypeEnum.JSON: 'c_value_json',
'index_{0}'.format(ValueTypeEnum.INT): 'c_value_index_integers',
'index_{0}'.format(ValueTypeEnum.TEXT): 'c_value_index_texts',
'index_{0}'.format(ValueTypeEnum.DATETIME): 'c_value_index_datetime',
'index_{0}'.format(ValueTypeEnum.DATE): 'c_value_index_datetime',
'index_{0}'.format(ValueTypeEnum.TIME): 'c_value_index_texts',
'index_{0}'.format(ValueTypeEnum.FLOAT): 'c_value_index_floats',
'index_{0}'.format(ValueTypeEnum.JSON): 'c_value_json',
}
es_type = {
ValueTypeEnum.INT: 'long',
ValueTypeEnum.TEXT: 'text',
ValueTypeEnum.DATETIME: 'text',
ValueTypeEnum.DATE: 'text',
ValueTypeEnum.TIME: 'text',
ValueTypeEnum.FLOAT: 'float',
ValueTypeEnum.JSON: 'object'
}
class TableMap(object):
def __init__(self, attr_name=None, attr=None, is_index=None):
self.attr_name = attr_name
self.attr = attr
self.is_index = is_index
@property
def table(self):
attr = AttributeCache.get(self.attr_name) if not self.attr else self.attr
if attr.value_type != ValueTypeEnum.TEXT and attr.value_type != ValueTypeEnum.JSON:
self.is_index = True
elif self.is_index is None:
self.is_index = attr.is_index
i = "index_{0}".format(attr.value_type) if self.is_index else attr.value_type
return ValueTypeMap.table.get(i)
@property
def table_name(self):
attr = AttributeCache.get(self.attr_name) if not self.attr else self.attr
if attr.value_type != ValueTypeEnum.TEXT and attr.value_type != ValueTypeEnum.JSON:
self.is_index = True
elif self.is_index is None:
self.is_index = attr.is_index
i = "index_{0}".format(attr.value_type) if self.is_index else attr.value_type
return ValueTypeMap.table_name.get(i)
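A short sketch of how TableMap and ValueTypeMap are used, assuming an application context so AttributeCache can resolve the (illustrative) attribute "hostname":

from api.lib.cmdb.const import ValueTypeEnum
from api.lib.cmdb.utils import TableMap, ValueTypeMap

value_model = TableMap(attr_name="hostname").table       # e.g. CIIndexValueText
value_table = TableMap(attr_name="hostname").table_name  # e.g. "c_value_index_texts"

# (de)serialize raw values according to the declared value type
assert ValueTypeMap.deserialize[ValueTypeEnum.INT]("3.0") == 3
assert ValueTypeMap.serialize[ValueTypeEnum.FLOAT]("1.5") == 1.5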


@ -0,0 +1,297 @@
# -*- coding:utf-8 -*-
from __future__ import unicode_literals
import copy
import imp
import os
import tempfile
import jinja2
from flask import abort
from flask import current_app
from jinja2schema import infer
from jinja2schema import to_json_schema
from api.extensions import db
from api.lib.cmdb.attribute import AttributeManager
from api.lib.cmdb.cache import AttributeCache
from api.lib.cmdb.cache import CITypeAttributeCache
from api.lib.cmdb.const import OperateType
from api.lib.cmdb.const import ValueTypeEnum
from api.lib.cmdb.history import AttributeHistoryManger
from api.lib.cmdb.resp_format import ErrFormat
from api.lib.cmdb.utils import TableMap
from api.lib.cmdb.utils import ValueTypeMap
from api.lib.utils import handle_arg_list
from api.models.cmdb import CI
class AttributeValueManager(object):
"""
manage CI attribute values
"""
def __init__(self):
pass
@staticmethod
def _get_attr(key):
"""
:param key: id, name or alias
:return: attribute instance
"""
return AttributeCache.get(key)
def get_attr_values(self, fields, ci_id, ret_key="name", unique_key=None, use_master=False):
"""
:param fields:
:param ci_id:
:param ret_key: It can be name or alias
:param unique_key: primary attribute
:param use_master: Only for master-slave read-write separation
:return:
"""
res = dict()
for field in fields:
attr = self._get_attr(field)
if not attr:
continue
value_table = TableMap(attr=attr).table
rs = value_table.get_by(ci_id=ci_id,
attr_id=attr.id,
use_master=use_master,
to_dict=False)
field_name = getattr(attr, ret_key)
if attr.is_list:
res[field_name] = [ValueTypeMap.serialize[attr.value_type](i.value) for i in rs]
else:
res[field_name] = ValueTypeMap.serialize[attr.value_type](rs[0].value) if rs else None
if unique_key is not None and attr.id == unique_key.id and rs:
res['unique'] = unique_key.name
res['unique_alias'] = unique_key.alias
return res
@staticmethod
def _deserialize_value(value_type, value):
if not value:
return value
deserialize = ValueTypeMap.deserialize[value_type]
try:
v = deserialize(value)
return v
except ValueError:
return abort(400, ErrFormat.attribute_value_invalid.format(value))
@staticmethod
def _check_is_choice(attr, value_type, value):
choice_values = AttributeManager.get_choice_values(attr.id, value_type, attr.choice_web_hook, attr.choice_other)
if str(value) not in list(map(str, [i[0] for i in choice_values])):
return abort(400, ErrFormat.not_in_choice_values.format(value))
@staticmethod
def _check_is_unique(value_table, attr, ci_id, type_id, value):
existed = db.session.query(value_table.attr_id).join(CI, CI.id == value_table.ci_id).filter(
CI.type_id == type_id).filter(
value_table.attr_id == attr.id).filter(value_table.deleted.is_(False)).filter(
value_table.value == value).filter(value_table.ci_id != ci_id).first()
existed and abort(400, ErrFormat.attribute_value_unique_required.format(attr.alias, value))
@staticmethod
def _check_is_required(type_id, attr, value, type_attr=None):
type_attr = type_attr or CITypeAttributeCache.get(type_id, attr.id)
if type_attr and type_attr.is_required and not value and value != 0:
return abort(400, ErrFormat.attribute_value_required.format(attr.alias))
def _validate(self, attr, value, value_table, ci=None, type_id=None, ci_id=None, type_attr=None):
ci = ci or {}
v = self._deserialize_value(attr.value_type, value)
attr.is_choice and value and self._check_is_choice(attr, attr.value_type, v)
attr.is_unique and self._check_is_unique(
value_table, attr, ci and ci.id or ci_id, ci and ci.type_id or type_id, v)
self._check_is_required(ci and ci.type_id or type_id, attr, v, type_attr=type_attr)
if v == "" and attr.value_type not in (ValueTypeEnum.TEXT,):
v = None
return v
@staticmethod
def _write_change(ci_id, attr_id, operate_type, old, new, record_id, type_id):
return AttributeHistoryManger.add(record_id, ci_id, [(attr_id, operate_type, old, new)], type_id)
@staticmethod
def _write_change2(changed):
record_id = None
for ci_id, attr_id, operate_type, old, new, type_id in changed:
record_id = AttributeHistoryManger.add(record_id, ci_id, [(attr_id, operate_type, old, new)], type_id,
commit=False, flush=False)
try:
db.session.commit()
except Exception as e:
db.session.rollback()
current_app.logger.error("write change failed: {}".format(str(e)))
return record_id
@staticmethod
def _compute_attr_value_from_expr(expr, ci_dict):
t = jinja2.Template(expr).render(ci_dict)
try:
return eval(t)  # evaluate the rendered compute expression as Python
except Exception as e:
current_app.logger.warning(str(e))
return t
@staticmethod
def _compute_attr_value_from_script(script, ci_dict):
script = jinja2.Template(script).render(ci_dict)
script_f = tempfile.NamedTemporaryFile(delete=False, suffix=".py")
script_f.write(script.encode('utf-8'))
script_f.close()
try:
path = script_f.name
dir_name, name = os.path.dirname(path), os.path.basename(path)[:-3]
fp, path, desc = imp.find_module(name, [dir_name])
mod = imp.load_module(name, fp, path, desc)
if hasattr(mod, 'computed'):
return mod.computed()
except Exception as e:
current_app.logger.error(str(e))
finally:
os.remove(script_f.name)
@staticmethod
def _jinja2_parse(content):
schema = to_json_schema(infer(content))
return [var for var in schema.get("properties")]
def _compute_attr_value(self, attr, payload, ci_id):
attrs = (self._jinja2_parse(attr['compute_expr']) if attr.get('compute_expr')
else self._jinja2_parse(attr['compute_script']))
not_existed = [i for i in attrs if i not in payload]
if ci_id is not None:
payload.update(self.get_attr_values(not_existed, ci_id))
if attr['compute_expr']:
return self._compute_attr_value_from_expr(attr['compute_expr'], payload)
elif attr['compute_script']:
return self._compute_attr_value_from_script(attr['compute_script'], payload)
def handle_ci_compute_attributes(self, ci_dict, computed_attrs, ci):
payload = copy.deepcopy(ci_dict)
for attr in computed_attrs:
computed_value = self._compute_attr_value(attr, payload, ci and ci.id)
if computed_value is not None:
ci_dict[attr['name']] = computed_value
def valid_attr_value(self, ci_dict, type_id, ci_id, name2attr, alias2attr=None, ci_attr2type_attr=None):
key2attr = dict()
alias2attr = alias2attr or {}
ci_attr2type_attr = ci_attr2type_attr or {}
for key, value in ci_dict.items():
attr = name2attr.get(key) or alias2attr.get(key)
key2attr[key] = attr
value_table = TableMap(attr=attr).table
try:
if attr.is_list:
value_list = [self._validate(attr, i, value_table, ci=None, type_id=type_id, ci_id=ci_id,
type_attr=ci_attr2type_attr.get(attr.id))
for i in handle_arg_list(value)]
ci_dict[key] = value_list
if not value_list:
self._check_is_required(type_id, attr, '')
else:
value = self._validate(attr, value, value_table, ci=None, type_id=type_id, ci_id=ci_id,
type_attr=ci_attr2type_attr.get(attr.id))
ci_dict[key] = value
except Exception as e:
current_app.logger.warning(str(e))
return abort(400, ErrFormat.attribute_value_invalid2.format(
"{}({})".format(attr.alias, attr.name), value))
return key2attr
def create_or_update_attr_value(self, ci, ci_dict, key2attr):
"""
add or update attribute value, then write history
:param ci: instance object
:param ci_dict: attribute dict
:param key2attr: attr key to attr
:return:
"""
changed = []
for key, value in ci_dict.items():
attr = key2attr.get(key)
if not attr:
continue  # should not happen
value_table = TableMap(attr=attr).table
if attr.is_list:
existed_attrs = value_table.get_by(attr_id=attr.id, ci_id=ci.id, to_dict=False)
existed_values = [i.value for i in existed_attrs]
added = set(value) - set(existed_values)
deleted = set(existed_values) - set(value)
for v in added:
value_table.create(ci_id=ci.id, attr_id=attr.id, value=v, flush=False, commit=False)
changed.append((ci.id, attr.id, OperateType.ADD, None, v, ci.type_id))
for v in deleted:
existed_attr = existed_attrs[existed_values.index(v)]
existed_attr.delete(flush=False, commit=False)
changed.append((ci.id, attr.id, OperateType.DELETE, v, None, ci.type_id))
else:
existed_attr = value_table.get_by(attr_id=attr.id, ci_id=ci.id, first=True, to_dict=False)
existed_value = existed_attr and existed_attr.value
if existed_value is None and value is not None:
value_table.create(ci_id=ci.id, attr_id=attr.id, value=value, flush=False, commit=False)
changed.append((ci.id, attr.id, OperateType.ADD, None, value, ci.type_id))
else:
if existed_value != value:
if value is None:
existed_attr.delete(flush=False, commit=False)
else:
existed_attr.update(value=value, flush=False, commit=False)
changed.append((ci.id, attr.id, OperateType.UPDATE, existed_value, value, ci.type_id))
try:
db.session.commit()
except Exception as e:
db.session.rollback()
current_app.logger.warning(str(e))
return abort(400, ErrFormat.attribute_value_unknown_error.format(str(e)))
return self._write_change2(changed)
@staticmethod
def delete_attr_value(attr_id, ci_id):
attr = AttributeCache.get(attr_id)
if attr is not None:
value_table = TableMap(attr=attr).table
for item in value_table.get_by(attr_id=attr.id, ci_id=ci_id, to_dict=False):
item.delete()
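A sketch of the write path used by CI create/update, assuming an application context, an existing CI instance ci and its attribute definitions attrs; the import path and names are assumptions:

from api.lib.cmdb.value import AttributeValueManager  # path assumed

mgr = AttributeValueManager()
ci_dict = {"hostname": "web-01"}
name2attr = {a.name: a for a in attrs}                 # attrs: the CIType's attributes
key2attr = mgr.valid_attr_value(ci_dict, ci.type_id, ci.id, name2attr)
record_id = mgr.create_or_update_attr_value(ci, ci_dict, key2attr)  # also writes history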


@ -0,0 +1,116 @@
# -*- coding:utf-8 -*-
from flask import abort
from flask import current_app
from api.lib.common_setting.resp_format import ErrFormat
from api.lib.perm.acl.cache import RoleCache, AppCache
from api.lib.perm.acl.role import RoleCRUD, RoleRelationCRUD
from api.lib.perm.acl.user import UserCRUD
from api.lib.perm.acl.resource import ResourceTypeCRUD, ResourceCRUD
class ACLManager(object):
def __init__(self, app_name='acl', uid=None):
self.log = current_app.logger
self.app_name = app_name
self.uid = uid
@staticmethod
def get_all_users():
try:
numfound, users = UserCRUD.search(None, 1, 999999)
users = [i.to_dict() for i in users]
for u in users:
u.pop('password', None)
u.pop('key', None)
u.pop('secret', None)
return users
except Exception as e:
current_app.logger.error(str(e))
raise Exception(ErrFormat.acl_get_all_users_failed.format(str(e)))
@staticmethod
def create_user(payload):
user = UserCRUD.add(**payload)
return user.to_dict()
@staticmethod
def edit_user(uid, payload):
user = UserCRUD.update(uid, **payload)
return user.to_dict()
def get_all_roles(self):
numfound, roles = RoleCRUD.search(
None, self.app_name, 1, 999999, True, True, False)
return [i.to_dict() for i in roles]
def remove_user_from_role(self, user_rid, payload):
app_id = self.app_name
app = AppCache.get(app_id)
if app and app.name == "acl":
app_id = None # global
RoleRelationCRUD.delete2(
payload.get('parent_id'), user_rid, app_id)
return dict(
message="success"
)
def add_user_to_role(self, role_id, payload):
app_id = self.app_name
app = AppCache.get(self.app_name)
if app and app.name == "acl":
app_id = None
role = RoleCache.get(role_id)
res = RoleRelationCRUD.add(
role, role_id, payload['child_ids'], app_id)
return res
@staticmethod
def create_role(payload):
payload['is_app_admin'] = payload.get('is_app_admin', False)
role = RoleCRUD.add_role(**payload)
return role.to_dict()
@staticmethod
def edit_role(_id, payload):
role = RoleCRUD.update_role(_id, **payload)
return role.to_dict()
@staticmethod
def delete_role(_id, payload):
RoleCRUD.delete_role(_id)
return dict(rid=_id)
def get_user_info(self, username):
from api.lib.perm.acl.acl import ACLManager as ACL
user_info = ACL().get_user_info(username, self.app_name)
result = dict(name=user_info.get('nickname') or username,
username=user_info.get('username') or username,
email=user_info.get('email'),
uid=user_info.get('uid'),
rid=user_info.get('rid'),
role=dict(permissions=user_info.get('parents')),
avatar=user_info.get('avatar'))
return result
def validate_app(self):
return AppCache.get(self.app_name)
def get_all_resources_types(self, q=None, page=1, page_size=999999):
app_id = self.validate_app().id
numfound, res, id2perms = ResourceTypeCRUD.search(q, app_id, page, page_size)
return dict(
numfound=numfound,
groups=[i.to_dict() for i in res],
id2perms=id2perms
)
def create_resource(self, payload):
payload['app_id'] = self.validate_app().id
resource = ResourceCRUD.add(**payload)
return resource.to_dict()
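A sketch of the common-setting ACL wrapper above, assuming an application context and a reachable ACL backend; the username is illustrative:

from api.lib.common_setting.acl import ACLManager

acl = ACLManager(app_name='cmdb')
users = acl.get_all_users()            # password/key/secret are already stripped
roles = acl.get_all_roles()
info = acl.get_user_info('demo_user')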


@ -0,0 +1,46 @@
from flask import abort
from api.extensions import db
from api.lib.common_setting.resp_format import ErrFormat
from api.models.common_setting import CommonData
class CommonDataCRUD(object):
@staticmethod
def get_data_by_type(data_type):
return CommonData.get_by(data_type=data_type)
@staticmethod
def get_data_by_id(_id, to_dict=True):
return CommonData.get_by(first=True, id=_id, to_dict=to_dict)
@staticmethod
def create_new_data(data_type, **kwargs):
try:
return CommonData.create(data_type=data_type, **kwargs)
except Exception as e:
db.session.rollback()
abort(400, str(e))
@staticmethod
def update_data(_id, **kwargs):
existed = CommonDataCRUD.get_data_by_id(_id, to_dict=False)
if not existed:
abort(404, ErrFormat.common_data_not_found.format(_id))
try:
return existed.update(**kwargs)
except Exception as e:
db.session.rollback()
abort(400, str(e))
@staticmethod
def delete(_id):
existed = CommonDataCRUD.get_data_by_id(_id, to_dict=False)
if not existed:
abort(404, ErrFormat.common_data_not_found.format(_id))
try:
existed.soft_delete()
except Exception as e:
db.session.rollback()
abort(400, str(e))
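A sketch of the CommonData helpers, assuming an application context; the import path, the data_type value and the data column are all assumptions about the CommonData model:

from api.lib.common_setting.common_data import CommonDataCRUD  # path assumed

row = CommonDataCRUD.create_new_data("notice_config", data={"webhook": ""})       # data column assumed
CommonDataCRUD.update_data(row.id, data={"webhook": "https://example.com/hook"})
CommonDataCRUD.delete(row.id)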


@ -0,0 +1,44 @@
# -*- coding:utf-8 -*-
from api.extensions import cache
from api.models.common_setting import CompanyInfo
class CompanyInfoCRUD(object):
@staticmethod
def get():
return CompanyInfo.get_by(first=True) or {}
@staticmethod
def create(**kwargs):
res = CompanyInfo.create(**kwargs)
CompanyInfoCache.refresh(res.info)
return res
@staticmethod
def update(_id, **kwargs):
kwargs.pop('id', None)
existed = CompanyInfo.get_by_id(_id)
if not existed:
existed = CompanyInfoCRUD.create(**kwargs)
else:
existed = existed.update(**kwargs)
CompanyInfoCache.refresh(existed.info)
return existed
class CompanyInfoCache(object):
key = 'CompanyInfoCache::'
@classmethod
def get(cls):
info = cache.get(cls.key)
if not info:
res = CompanyInfo.get_by(first=True) or {}
info = res.get('info', {})
cache.set(cls.key, info)
return info
@classmethod
def refresh(cls, info):
cache.set(cls.key, info)
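A sketch of the company-info pair above, assuming an application context and a configured cache backend; the import path and payload are assumptions:

from api.lib.common_setting.company_info import CompanyInfoCRUD, CompanyInfoCache  # path assumed

CompanyInfoCRUD.update(1, info={"name": "ACME"})  # creates the row when id 1 does not exist
info = CompanyInfoCache.get()                     # falls back to the database on a cache miss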


@ -0,0 +1,21 @@
from api.lib.common_setting.utils import BaseEnum
COMMON_SETTING_QUEUE = "common_setting_async"
class OperatorType(BaseEnum):
EQUAL = 1
NOT_EQUAL = 2
IN = 3
NOT_IN = 4
GREATER_THAN = 5
LESS_THAN = 6
IS_EMPTY = 7
IS_NOT_EMPTY = 8
BotNameMap = {
'wechatApp': 'wechatBot',
'feishuApp': 'feishuBot',
'dingdingApp': 'dingdingBot',
}
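A sketch of how OperatorType is meant to be used, as one item of the "conditions" payload consumed by EmployeeCRUD.get_expr_by_condition later in this diff; column and value are illustrative:

from api.lib.common_setting.const import OperatorType

condition = {"column": "nickname", "operator": OperatorType.IN,
             "value": "wang", "relation": "&"}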


@ -0,0 +1,395 @@
# -*- coding:utf-8 -*-
from flask import abort
from treelib import Tree
from wtforms import Form
from wtforms import IntegerField
from wtforms import StringField
from wtforms import validators
from api.extensions import db
from api.lib.common_setting.resp_format import ErrFormat
from api.lib.perm.acl.role import RoleCRUD
from api.models.common_setting import Department, Employee
sub_departments_column_name = 'sub_departments'
def get_all_department_list(to_dict=True):
criterion = [
Department.deleted == 0,
]
query = Department.query.filter(
*criterion
).order_by(Department.department_id.asc())
results = query.all()
return [r.to_dict() for r in results] if to_dict else results
def get_all_employee_list(block=0, to_dict=True):
criterion = [
Employee.deleted == 0,
]
if block >= 0:
criterion.append(
Employee.block == block
)
results = db.session.query(Employee).filter(*criterion).all()
DepartmentTreeEmployeeColumns = [
'acl_rid',
'employee_id',
'username',
'nickname',
'email',
'mobile',
'direct_supervisor_id',
'block',
'department_id',
]
def format_columns(e):
return {column: getattr(e, column) for column in DepartmentTreeEmployeeColumns}
return [format_columns(r) for r in results] if to_dict else results
class DepartmentTree(object):
def __init__(self, append_employee=False, block=-1):
self.append_employee = append_employee
self.block = block
self.all_department_list = get_all_department_list()
self.all_employee_list = get_all_employee_list(
block) if append_employee else None
def prepare(self):
pass
def get_employees_by_d_id(self, d_id):
block = self.block
def filter_department_id(e):
if self.block != -1:
return e['department_id'] == d_id and e['block'] == block
return e['department_id'] == d_id
results = list(filter(lambda e: filter_department_id(e), self.all_employee_list))
return results
def get_department_by_parent_id(self, parent_id):
results = list(filter(lambda d: d['department_parent_id'] == parent_id, self.all_department_list))
if not results:
return []
return results
def get_tree_departments(self):
# top-level departments
top_departments = self.get_department_by_parent_id(-1)
if len(top_departments) == 0:
return []
d_list = []
for top_d in top_departments:
department_id = top_d['department_id']
sub_deps = self.get_department_by_parent_id(department_id)
employees = []
if self.append_employee:
employees = self.get_employees_by_d_id(department_id)
top_d['employees'] = employees
if len(sub_deps) == 0:
top_d[sub_departments_column_name] = []
d_list.append(top_d)
continue
self.parse_sub_department(sub_deps, top_d)
d_list.append(top_d)
return d_list
def get_all_departments(self, is_tree=1):
if len(self.all_department_list) == 0:
return []
if is_tree != 1:
return self.all_department_list
return self.get_tree_departments()
def parse_sub_department(self, deps, top_d):
sub_departments = []
for d in deps:
sub_deps = self.get_department_by_parent_id(d['department_id'])
employees = []
if self.append_employee:
employees = self.get_employees_by_d_id(d['department_id'])
d['employees'] = employees
if len(sub_deps) == 0:
d[sub_departments_column_name] = []
sub_departments.append(d)
continue
self.parse_sub_department(sub_deps, d)
sub_departments.append(d)
top_d[sub_departments_column_name] = sub_departments
class DepartmentForm(Form):
department_name = StringField(validators=[
validators.DataRequired(message="部门名称不能为空"),
validators.Length(max=255),
])
department_director_id = IntegerField(validators=[], default=0)
department_parent_id = IntegerField(validators=[], default=1)
class DepartmentCRUD(object):
@staticmethod
def add(**kwargs):
DepartmentCRUD.check_department_name_unique(kwargs['department_name'])
department_parent_id = kwargs.get('department_parent_id', 0)
DepartmentCRUD.check_department_parent_id(department_parent_id)
DepartmentCRUD.check_department_parent_id_allow(
-1, department_parent_id)
try:
role = RoleCRUD.add_role(name=kwargs['department_name'])
except Exception as e:
return abort(400, ErrFormat.acl_add_role_failed.format(str(e)))
kwargs['acl_rid'] = role.id
try:
db_department = Department.create(
**kwargs
)
except Exception as e:
return abort(400, str(e))
return db_department
@staticmethod
def check_department_parent_id_allow(d_id, department_parent_id):
if department_parent_id == 0:
return
allow_p_d_id_list = DepartmentCRUD.get_allow_parent_d_id_by(d_id)
target = list(
filter(lambda d: d['department_id'] == department_parent_id, allow_p_d_id_list))
if len(target) == 0:
try:
d = Department.get_by(
first=True, to_dict=False, department_id=department_parent_id)
name = d.department_name if d else ErrFormat.department_id_not_found.format(department_parent_id)
except Exception as e:
name = ErrFormat.department_id_not_found.format(department_parent_id)
abort(400, ErrFormat.cannot_to_be_parent_department.format(name))
@staticmethod
def check_department_parent_id(department_parent_id):
if int(department_parent_id) < 0:
abort(400, ErrFormat.parent_department_id_must_more_than_zero)
@staticmethod
def check_department_name_unique(name, _id=0):
criterion = [
Department.department_name == name,
Department.deleted == 0,
]
if _id > 0:
criterion.append(
Department.department_id != _id
)
res = Department.query.filter(
*criterion
).all()
res and abort(
400, ErrFormat.department_name_already_exists.format(name)
)
@staticmethod
def edit(_id, **kwargs):
DepartmentCRUD.check_department_name_unique(
kwargs['department_name'], _id)
kwargs.pop('department_id', None)
existed = Department.get_by(
first=True, department_id=_id, to_dict=False)
if not existed:
abort(404, ErrFormat.department_id_not_found.format(_id))
department_parent_id = kwargs.get('department_parent_id', 0)
DepartmentCRUD.check_department_parent_id(department_parent_id)
if department_parent_id > 0:
DepartmentCRUD.check_department_parent_id_allow(
_id, department_parent_id)
try:
RoleCRUD.update_role(
existed.acl_rid, name=kwargs['department_name'])
except Exception as e:
return abort(400, ErrFormat.acl_update_role_failed.format(str(e)))
try:
existed.update(**kwargs)
except Exception as e:
return abort(400, str(e))
@staticmethod
def delete(_id):
existed = Department.get_by(
first=True, department_id=_id, to_dict=False)
if not existed:
abort(404, ErrFormat.department_id_not_found.format(_id))
try:
RoleCRUD.delete_role(existed.acl_rid)
except Exception as e:
pass
return existed.soft_delete()
@staticmethod
def get_allow_parent_d_id_by(department_id):
tree_list = DepartmentCRUD.get_department_tree_list()
allow_d_id_list = []
for tree in tree_list:
if department_id > 0:
try:
tree.remove_subtree(department_id)
except Exception as e:
pass
[allow_d_id_list.append({'department_id': int(n.identifier), 'department_name': n.tag}) for n in
tree.all_nodes()]
return allow_d_id_list
@staticmethod
def update_department_sort(department_list):
d_map = {d['id']: d['sort_value'] for d in department_list}
d_id = [d['id'] for d in department_list]
db_list = Department.query.filter(
Department.department_id.in_(d_id),
Department.deleted == 0
).all()
for existed in db_list:
existed.update(sort_value=d_map[existed.department_id])
return []
@staticmethod
def get_all_departments_with_employee(block):
return DepartmentTree(True, block).get_all_departments(1)
@staticmethod
def get_department_tree_list():
all_deps = get_all_department_list()
if len(all_deps) == 0:
return []
top_deps = list(filter(lambda d: d['department_parent_id'] == -1, all_deps))
if len(top_deps) == 0:
return []
tree_list = []
for top_d in top_deps:
tree = Tree()
identifier_root = top_d['department_id']
tree.create_node(
top_d['department_name'],
identifier_root
)
sub_ds = list(filter(lambda d: d['department_parent_id'] == identifier_root, all_deps))
if len(sub_ds) == 0:
tree_list.append(tree)
continue
DepartmentCRUD.parse_sub_department_node(
sub_ds, all_deps, tree, identifier_root)
tree_list.append(tree)
return tree_list
@staticmethod
def parse_sub_department_node(sub_ds, all_ds, tree, parent_id):
for d in sub_ds:
tree.create_node(
d['department_name'],
d['department_id'],
parent=parent_id
)
next_sub_ds = list(filter(lambda item_d: item_d['department_parent_id'] == d['department_id'], all_ds))
if len(next_sub_ds) == 0:
continue
DepartmentCRUD.parse_sub_department_node(
next_sub_ds, all_ds, tree, d['department_id'])
@staticmethod
def get_department_by_query(query, to_dict=True):
results = query.all()
if not results:
return []
return results if not to_dict else [r.to_dict() for r in results]
@staticmethod
def get_departments_and_ids(department_parent_id, block):
query = Department.query.filter(
Department.department_parent_id == department_parent_id,
Department.deleted == 0,
).order_by(Department.sort_value.asc())
all_departments = DepartmentCRUD.get_department_by_query(query)
if len(all_departments) == 0:
return [], []
tree_list = DepartmentCRUD.get_department_tree_list()
all_employee_list = get_all_employee_list(block)
department_id_list = [d['department_id'] for d in all_departments]
query = Department.query.filter(
Department.department_parent_id.in_(department_id_list),
Department.deleted == 0,
).order_by(Department.sort_value.asc()).group_by(Department.department_id)
sub_deps = DepartmentCRUD.get_department_by_query(query)
sub_map = {d['department_parent_id']: 1 for d in sub_deps}
for d in all_departments:
d['has_sub'] = sub_map.get(d['department_id'], 0)
d_ids = DepartmentCRUD.get_department_id_list_by_root(d['department_id'], tree_list)
d['employee_count'] = len(list(filter(lambda e: e['department_id'] in d_ids, all_employee_list)))
return all_departments, department_id_list
@staticmethod
def get_department_id_list_by_root(root_department_id, tree_list=None):
if tree_list is None:
tree_list = DepartmentCRUD.get_department_tree_list()
id_list = []
for tree in tree_list:
try:
tmp_tree = tree.subtree(root_department_id)
[id_list.append(int(n.identifier))
for n in tmp_tree.all_nodes()]
except Exception as e:
pass
return id_list
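A sketch of the department helpers above, assuming an application context with seeded Department/Employee rows; the import path and ids are assumptions:

from api.lib.common_setting.department import DepartmentCRUD, DepartmentTree  # path assumed

tree = DepartmentTree(append_employee=True, block=0).get_all_departments(is_tree=1)
sub_ids = DepartmentCRUD.get_department_id_list_by_root(1)
departments, id_list = DepartmentCRUD.get_departments_and_ids(-1, block=0)  # -1: top level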


@ -0,0 +1,734 @@
# -*- coding:utf-8 -*-
import copy
import traceback
from datetime import datetime
import requests
from flask import abort
from flask_login import current_user
from sqlalchemy import or_, literal_column, func, not_, and_
from werkzeug.datastructures import MultiDict
from wtforms import Form
from wtforms import IntegerField
from wtforms import StringField
from wtforms import validators
from api.extensions import db
from api.lib.common_setting.acl import ACLManager
from api.lib.common_setting.const import COMMON_SETTING_QUEUE, OperatorType
from api.lib.common_setting.resp_format import ErrFormat
from api.models.common_setting import Employee, Department
acl_user_columns = [
'email',
'mobile',
'nickname',
'username',
'password',
'block',
'avatar',
]
employee_pop_columns = ['password']
can_not_edit_columns = ['email']
def edit_acl_user(uid, **kwargs):
user_data = {column: kwargs.get(
column, '') for column in acl_user_columns if kwargs.get(column, '')}
if 'block' in kwargs:
user_data['block'] = kwargs.get('block')
try:
acl = ACLManager()
return acl.edit_user(uid, user_data)
except Exception as e:
abort(400, ErrFormat.acl_edit_user_failed.format(str(e)))
def get_block_value(value):
if value in ['False', 'false', '0', 0]:
value = False
else:
value = True
return value
def get_employee_list_by_direct_supervisor_id(direct_supervisor_id):
return Employee.get_by(direct_supervisor_id=direct_supervisor_id)
def get_department_list_by_director_id(director_id):
return Department.get_by(department_director_id=director_id)
def raise_exception(err):
raise Exception(err)
def check_department_director_id_or_direct_supervisor_id(_id):
get_employee_list_by_direct_supervisor_id(
_id) and raise_exception(ErrFormat.cannot_block_this_employee_is_other_direct_supervisor)
get_department_list_by_director_id(
_id) and raise_exception(ErrFormat.cannot_block_this_employee_is_department_manager)
class EmployeeCRUD(object):
@staticmethod
def get_employee_by_id(_id):
return Employee.get_by(
first=True, to_dict=False, deleted=0, employee_id=_id
) or abort(404, ErrFormat.employee_id_not_found.format(_id))
@staticmethod
def get_employee_by_uid_with_create(_uid):
try:
return EmployeeCRUD.get_employee_by_uid(_uid).to_dict()
except Exception as e:
if '不存在' not in str(e):  # '不存在' means "does not exist" (the not-found error from get_employee_by_uid)
abort(400, str(e))
try:
acl = ACLManager('acl')
user_info = acl.get_user_info(_uid)
return EmployeeCRUD.check_acl_user_and_create(user_info)
except Exception as e:
abort(400, str(e))
@staticmethod
def get_employee_by_uid(_uid):
return Employee.get_by(
first=True, to_dict=False, deleted=0, acl_uid=_uid
) or abort(404, ErrFormat.acl_uid_not_found.format(_uid))
@staticmethod
def check_acl_user_and_create(user_info):
existed = Employee.get_by(
first=True, to_dict=False, username=user_info['username'])
if existed:
existed.update(
acl_uid=user_info['uid'],
)
return existed.to_dict()
if not user_info.get('nickname', None):
user_info['nickname'] = user_info['name']
form = EmployeeAddForm(MultiDict(user_info))
data = form.data
data['password'] = ''
data['acl_uid'] = user_info['uid']
employee = CreateEmployee().create_single(**data)
return employee.to_dict()
@staticmethod
def add(**kwargs):
try:
return CreateEmployee().create_single(**kwargs)
except Exception as e:
abort(400, str(e))
@staticmethod
def update(_id, **kwargs):
EmployeeCRUD.check_email_unique(kwargs['email'], _id)
existed = EmployeeCRUD.get_employee_by_id(_id)
try:
edit_acl_user(existed.acl_uid, **kwargs)
for column in employee_pop_columns:
kwargs.pop(column, None)
new_department_id = kwargs.get('department_id', None)
e_list = []
if new_department_id is not None and new_department_id != existed.department_id:
e_list = [dict(
e_acl_rid=existed.acl_rid,
department_id=existed.department_id
)]
existed.update(**kwargs)
if len(e_list) > 0:
from api.tasks.common_setting import edit_employee_department_in_acl
edit_employee_department_in_acl.apply_async(
args=(e_list, new_department_id, current_user.uid),
queue=COMMON_SETTING_QUEUE
)
return existed
except Exception as e:
return abort(400, str(e))
@staticmethod
def edit_employee_by_uid(_uid, **kwargs):
existed = EmployeeCRUD.get_employee_by_uid(_uid)
try:
user = edit_acl_user(_uid, **kwargs)
for column in employee_pop_columns:
if kwargs.get(column):
kwargs.pop(column)
return existed.update(**kwargs)
except Exception as e:
return abort(400, str(e))
@staticmethod
def change_password_by_uid(_uid, password):
existed = EmployeeCRUD.get_employee_by_uid(_uid)
try:
user = edit_acl_user(_uid, password=password)
except Exception as e:
return abort(400, str(e))
@staticmethod
def get_all_position():
criterion = [
Employee.deleted == 0,
]
results = Employee.query.with_entities(
Employee.position_name
).filter(*criterion).group_by(
Employee.position_name
).order_by(
func.CONVERT(literal_column('position_name using gbk'))
).all()
return [item[0] for item in results if (item[0] is not None and item[0] != '')]
@staticmethod
def get_employee_count(block_status):
criterion = [
Employee.deleted == 0
]
if block_status >= 0:
criterion.append(
Employee.block == block_status
)
return Employee.query.filter(
*criterion
).count()
@staticmethod
def check_email_unique(email, _id=0):
criterion = [
Employee.email == email,
Employee.deleted == 0,
]
if _id > 0:
criterion.append(
Employee.employee_id != _id
)
res = Employee.query.filter(
*criterion
).all()
if res:
err = ErrFormat.email_already_exists.format(email)
raise Exception(err)
@staticmethod
def get_employee_list_by_body(department_id, block_status, search='', order='', conditions=None, page=1,
page_size=10):
criterion = [
Employee.deleted == 0
]
if block_status >= 0:
criterion.append(
Employee.block == block_status
)
if len(search) > 0:
search_key = f"%{search}%"
criterion.append(
or_(
Employee.email.like(search_key),
Employee.username.like(search_key),
Employee.nickname.like(search_key)
)
)
if department_id > 0:
from api.lib.common_setting.department import DepartmentCRUD
department_id_list = DepartmentCRUD.get_department_id_list_by_root(
department_id)
criterion.append(
Employee.department_id.in_(department_id_list)
)
if conditions:
query = EmployeeCRUD.parse_condition_list_to_query(conditions).filter(
*criterion
)
else:
query = db.session.query(Employee, Department).outerjoin(Department).filter(
*criterion
)
if len(order) > 0:
query = EmployeeCRUD.format_query_sort(query, order)
pagination = query.paginate(page=page, per_page=page_size)
employees = []
for r in pagination.items:
d = r.Employee.to_dict()
d['department_name'] = r.Department.department_name
employees.append(d)
return {
'data_list': employees,
'page': page,
'page_size': page_size,
'total': pagination.total,
}
@staticmethod
def parse_condition_list_to_query(condition_list):
query = db.session.query(Employee, Department).outerjoin(Department)
query = EmployeeCRUD.get_query_by_conditions(query, condition_list)
return query
@staticmethod
def get_expr_by_condition(column, operator, value, relation):
"""
get expr: (and_list, or_list)
"""
attr = EmployeeCRUD.get_attr_by_column(column)
# build the filter expression for the given operator
if operator == OperatorType.EQUAL:
expr = [attr == value]
elif operator == OperatorType.NOT_EQUAL:
expr = [attr != value]
elif operator == OperatorType.IN:
expr = [attr.like('%{}%'.format(value))]
elif operator == OperatorType.NOT_IN:
expr = [not_(attr.like('%{}%'.format(value)))]
elif operator == OperatorType.GREATER_THAN:
expr = [attr > value]
elif operator == OperatorType.LESS_THAN:
expr = [attr < value]
elif operator == OperatorType.IS_EMPTY:
if value:
abort(400, ErrFormat.query_column_none_keep_value_empty.format(column))
expr = [attr.is_(None)]
if column not in ["last_login"]:
expr += [attr == '']
expr = [or_(*expr)]
elif operator == OperatorType.IS_NOT_EMPTY:
if value:
abort(400, ErrFormat.query_column_none_keep_value_empty.format(column))
expr = [attr.isnot(None)]
if column not in ["last_login"]:
expr += [attr != '']
expr = [and_(*expr)]
else:
abort(400, ErrFormat.not_support_operator.format(operator))
if relation == "&":
return expr, []
elif relation == "|":
return [], expr
else:
return abort(400, ErrFormat.not_support_relation.format(relation))
@staticmethod
def check_condition(column, operator, value, relation):
if column is None or operator is None or relation is None:
return abort(400, ErrFormat.conditions_field_missing)
if value and column == "last_login":
try:
value = datetime.strptime(value, "%Y-%m-%d %H:%M:%S")
except Exception as e:
abort(400, ErrFormat.datetime_format_error.format(column))
@staticmethod
def get_attr_by_column(column):
if 'department' in column:
attr = Department.__dict__[column]
else:
attr = Employee.__dict__[column]
return attr
@staticmethod
def get_query_by_conditions(query, conditions):
and_list = []
or_list = []
for condition in conditions:
operator = condition.get("operator", None)
column = condition.get("column", None)
relation = condition.get("relation", None)
value = condition.get("value", None)
EmployeeCRUD.check_condition(column, operator, value, relation)
a, o = EmployeeCRUD.get_expr_by_condition(
column, operator, value, relation)
and_list += a
or_list += o
query = query.filter(
Employee.deleted == 0,
or_(and_(*and_list), *or_list)
)
return query
@staticmethod
def get_employee_list_by(department_id, block_status, search='', order='', page=1, page_size=10):
criterion = [
Employee.deleted == 0
]
if block_status >= 0:
criterion.append(
Employee.block == block_status
)
if len(search) > 0:
search_key = f"%{search}%"
criterion.append(
or_(
Employee.email.like(search_key),
Employee.username.like(search_key),
Employee.nickname.like(search_key)
)
)
if department_id > 0:
from api.lib.common_setting.department import DepartmentCRUD
department_id_list = DepartmentCRUD.get_department_id_list_by_root(
department_id)
criterion.append(
Employee.department_id.in_(department_id_list)
)
query = db.session.query(Employee, Department).outerjoin(Department).filter(
*criterion
)
if len(order) > 0:
query = EmployeeCRUD.format_query_sort(query, order)
pagination = query.paginate(page=page, per_page=page_size)
employees = []
for r in pagination.items:
d = r.Employee.to_dict()
d['department_name'] = r.Department.department_name
employees.append(d)
return {
'data_list': employees,
'page': page,
'page_size': page_size,
'total': pagination.total,
}
@staticmethod
def format_query_sort(query, order):
order_list = order.split(',')
all_columns = Employee.get_columns()
for order_column in order_list:
if order_column.startswith('-'):
target_column = order_column[1:]
if target_column not in all_columns:
continue
query = query.order_by(getattr(Employee, target_column).desc())
else:
if order_column not in all_columns:
continue
query = query.order_by(getattr(Employee, order_column).asc())
return query
@staticmethod
def get_employees_by_department_id(department_id, block):
criterion = [
Employee.deleted == 0,
Employee.block == block,
]
if isinstance(department_id, list):
if len(department_id) == 0:
return []
else:
criterion.append(
Employee.department_id.in_(department_id)
)
else:
criterion.append(
Employee.department_id == department_id
)
results = Employee.query.filter(
*criterion
).all()
return [r.to_dict() for r in results]
@staticmethod
def remove_bind_notice_by_uid(_platform, _uid):
existed = EmployeeCRUD.get_employee_by_uid(_uid)
employee_data = existed.to_dict()
notice_info = employee_data.get('notice_info', {})
notice_info = copy.deepcopy(notice_info) if notice_info else {}
notice_info[_platform] = ''
existed.update(
notice_info=notice_info
)
return ErrFormat.notice_remove_bind_success
@staticmethod
def bind_notice_by_uid(_platform, _uid):
existed = EmployeeCRUD.get_employee_by_uid(_uid)
mobile = existed.mobile
if not mobile or len(mobile) == 0:
abort(400, ErrFormat.notice_bind_err_with_empty_mobile)
from api.lib.common_setting.notice_config import NoticeConfigCRUD
messenger = NoticeConfigCRUD.get_messenger_url()
if not messenger or len(messenger) == 0:
abort(400, ErrFormat.notice_please_config_messenger_first)
url = f"{messenger}/v1/uid/getbyphone"
try:
payload = dict(
phone=mobile,
sender=_platform
)
res = requests.post(url, json=payload)
result = res.json()
if res.status_code != 200:
raise Exception(result.get('msg', ''))
target_id = result.get('uid', '')
employee_data = existed.to_dict()
notice_info = employee_data.get('notice_info', {})
notice_info = copy.deepcopy(notice_info) if notice_info else {}
notice_info[_platform] = '' if not target_id else target_id
existed.update(
notice_info=notice_info
)
return ErrFormat.notice_bind_success
except Exception as e:
return abort(400, ErrFormat.notice_bind_failed.format(str(e)))
@staticmethod
def get_employee_notice_by_ids(employee_ids):
criterion = [
Employee.employee_id.in_(employee_ids),
Employee.deleted == 0,
]
direct_columns = ['email', 'mobile']
employees = Employee.query.filter(
*criterion
).all()
results = []
for employee in employees:
d = employee.to_dict()
tmp = dict(
employee_id=employee.employee_id,
)
for column in direct_columns:
tmp[column] = d.get(column, '')
notice_info = d.get('notice_info', {})
tmp.update(**notice_info)
results.append(tmp)
return results
def get_user_map(key='uid', acl=None):
"""
{
uid: userinfo
}
"""
if acl is None:
acl = ACLManager()
data = {user[key]: user for user in acl.get_all_users()}
return data
def format_params(params):
for k in ['_key', '_secret']:
params.pop(k, None)
return params
class CreateEmployee(object):
def __init__(self):
self.acl = ACLManager()
self.all_acl_users = self.acl.get_all_users()
def check_acl_user(self, user_data):
target_email = list(filter(lambda x: x['email'] == user_data['email'], self.all_acl_users))
if target_email:
return target_email[0]
target_username = list(filter(lambda x: x['username'] == user_data['username'], self.all_acl_users))
if target_username:
return target_username[0]
def add_acl_user(self, **kwargs):
user_data = {column: kwargs.get(
column, '') for column in acl_user_columns if kwargs.get(column, '')}
try:
existed = self.check_acl_user(user_data)
if not existed:
return self.acl.create_user(user_data)
return existed
except Exception as e:
abort(400, ErrFormat.acl_add_user_failed.format(str(e)))
def create_single(self, **kwargs):
EmployeeCRUD.check_email_unique(kwargs['email'])
user = self.add_acl_user(**kwargs)
kwargs['acl_uid'] = user['uid']
kwargs['last_login'] = user['last_login']
for column in employee_pop_columns:
kwargs.pop(column)
return Employee.create(
**kwargs
)
def create_single_with_import(self, **kwargs):
user = self.add_acl_user(**kwargs)
kwargs['acl_uid'] = user['uid']
kwargs['last_login'] = user['last_login']
for column in employee_pop_columns:
kwargs.pop(column)
existed = Employee.get_by(
first=True, to_dict=False, deleted=0, acl_uid=user['uid']
)
if existed:
return existed
return Employee.create(
**kwargs
)
def get_department_by_name(self, d_name):
return Department.get_by(first=True, department_name=d_name)
def get_end_department_id(self, department_name_list, department_name_map):
parent_id = 0
end_d_id = 0
for d_name in department_name_list:
tmp_d = self.get_department_by_name(d_name)
if not tmp_d:
tmp_d = Department.create(
department_name=d_name, department_parent_id=parent_id).to_dict()
else:
if tmp_d['department_parent_id'] != parent_id:
department_name_map[d_name] = tmp_d
raise Exception(ErrFormat.department_level_relation_error)
department_name_map[d_name] = tmp_d
end_d_id = tmp_d['department_id']
parent_id = tmp_d['department_id']
return end_d_id
def format_department_id(self, employee):
department_name_map = {}
try:
department_name = employee.get('department_name', '')
if len(department_name) == 0:
return employee
department_name_list = department_name.split('/')
employee['department_id'] = self.get_end_department_id(
department_name_list, department_name_map)
except Exception as e:
employee['err'] = str(e)
return employee
def batch_create(self, employee_list):
err_list = []
for employee in employee_list:
try:
username = employee.get('username', None)
if username is None:
employee['username'] = employee['email']
employee = self.format_department_id(employee)
err = employee.get('err', None)
if err:
raise Exception(err)
params = format_params(employee)
form = EmployeeAddForm(MultiDict(params))
if not form.validate():
raise Exception(
','.join(['{}: {}'.format(field, ','.join(msg)) for field, msg in form.errors.items()]))
self.create_single_with_import(**form.data)
except Exception as e:
err_list.append({
'email': employee.get('email', ''),
'nickname': employee.get('nickname', ''),
'err': str(e),
})
traceback.print_exc()
return err_list
class EmployeeAddForm(Form):
username = StringField(validators=[
validators.DataRequired(message=ErrFormat.username_is_required),
validators.Length(max=255),
])
email = StringField(validators=[
validators.DataRequired(message=ErrFormat.email_is_required),
validators.Email(message=ErrFormat.email_format_error),
validators.Length(max=255),
])
password = StringField(validators=[
validators.Length(max=255),
])
position_name = StringField(validators=[])
nickname = StringField(validators=[
validators.DataRequired(message=ErrFormat.nickname_is_required),
validators.Length(max=255),
])
sex = StringField(validators=[])
mobile = StringField(validators=[])
department_id = IntegerField(validators=[], default=0)
direct_supervisor_id = IntegerField(validators=[], default=0)
class EmployeeUpdateByUidForm(Form):
nickname = StringField(validators=[
validators.DataRequired(message=ErrFormat.nickname_is_required),
validators.Length(max=255),
])
avatar = StringField(validators=[])
sex = StringField(validators=[])
mobile = StringField(validators=[])
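
Usage sketch (illustrative, not part of the diff): driving the condition-based employee query above. The column/operator values below are assumptions; the supported operators come from get_expr_by_condition(), which is only partially shown here.

    # Each condition needs column / operator / value / relation, otherwise
    # check_condition() aborts with conditions_field_missing.
    conditions = [
        {"column": "nickname", "operator": "like", "value": "wang", "relation": "&"},
        {"column": "last_login", "operator": ">", "value": "2023-09-01 00:00:00", "relation": "&"},
    ]
    query = db.session.query(Employee, Department).outerjoin(Department)
    query = EmployeeCRUD.get_query_by_conditions(query, conditions)

    # Or use the pre-built pagination helper:
    page = EmployeeCRUD.get_employee_list_by(
        department_id=0, block_status=0, search="wang", order="-last_login",
        page=1, page_size=10)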


@ -0,0 +1,165 @@
import requests
from api.lib.common_setting.const import BotNameMap
from api.lib.common_setting.resp_format import ErrFormat
from api.models.common_setting import CompanyInfo, NoticeConfig
from wtforms import Form
from wtforms import StringField
from wtforms import validators
from flask import abort, current_app
class NoticeConfigCRUD(object):
@staticmethod
def add_notice_config(**kwargs):
platform = kwargs.get('platform')
NoticeConfigCRUD.check_platform(platform)
info = kwargs.get('info', {})
if 'name' not in info:
info['name'] = platform
kwargs['info'] = info
try:
NoticeConfigCRUD.update_messenger_config(**info)
res = NoticeConfig.create(
**kwargs
)
return res
except Exception as e:
return abort(400, str(e))
@staticmethod
def check_platform(platform):
NoticeConfig.get_by(first=True, to_dict=False, platform=platform) and \
abort(400, ErrFormat.notice_platform_existed.format(platform))
@staticmethod
def edit_notice_config(_id, **kwargs):
existed = NoticeConfigCRUD.get_notice_config_by_id(_id)
try:
info = kwargs.get('info', {})
if 'name' not in info:
info['name'] = existed.platform
kwargs['info'] = info
NoticeConfigCRUD.update_messenger_config(**info)
res = existed.update(**kwargs)
return res
except Exception as e:
return abort(400, str(e))
@staticmethod
def get_messenger_url():
from api.lib.common_setting.company_info import CompanyInfoCache
com_info = CompanyInfoCache.get()
if not com_info:
return
messenger = com_info.get('messenger', '')
if len(messenger) == 0:
return
if messenger[-1] == '/':
messenger = messenger[:-1]
return messenger
@staticmethod
def update_messenger_config(**kwargs):
try:
messenger = NoticeConfigCRUD.get_messenger_url()
if not messenger or len(messenger) == 0:
raise Exception(ErrFormat.notice_please_config_messenger_first)
url = f"{messenger}/v1/senders"
name = kwargs.get('name')
bot_list = kwargs.pop('bot', None)
for k, v in kwargs.items():
if isinstance(v, bool):
kwargs[k] = 'true' if v else 'false'
else:
kwargs[k] = str(v)
payload = {name: [kwargs]}
current_app.logger.info(f"update_messenger_config: {url}, {payload}")
res = requests.put(url, json=payload, timeout=2)
current_app.logger.info(f"update_messenger_config: {res.status_code}, {res.text}")
if not bot_list or len(bot_list) == 0:
return
bot_name = BotNameMap.get(name)
payload = {bot_name: bot_list}
current_app.logger.info(f"update_messenger_config: {url}, {payload}")
bot_res = requests.put(url, json=payload, timeout=2)
current_app.logger.info(f"update_messenger_config: {bot_res.status_code}, {bot_res.text}")
except Exception as e:
return abort(400, str(e))
@staticmethod
def get_notice_config_by_id(_id):
return NoticeConfig.get_by(first=True, to_dict=False, id=_id) or \
abort(400,
ErrFormat.notice_not_existed.format(_id))
@staticmethod
def get_all():
return NoticeConfig.get_by(to_dict=True)
@staticmethod
def test_send_email(receive_address, **kwargs):
messenger = NoticeConfigCRUD.get_messenger_url()
if not messenger or len(messenger) == 0:
abort(400, ErrFormat.notice_please_config_messenger_first)
url = f"{messenger}/v1/message"
recipient_email = receive_address
subject = 'Test Email'
body = 'This is a test email'
payload = {
"sender": 'email',
"msgtype": "text/plain",
"title": subject,
"content": body,
"tos": [recipient_email],
}
current_app.logger.info(f"test_send_email: {url}, {payload}")
response = requests.post(url, json=payload)
if response.status_code != 200:
abort(400, response.text)
return 1
@staticmethod
def get_app_bot():
result = []
for notice_app in NoticeConfig.get_by(to_dict=False):
if notice_app.platform in ['email']:
continue
info = notice_app.info
name = info.get('name', '')
if name not in BotNameMap:
continue
result.append(dict(
name=info.get('name', ''),
label=info.get('label', ''),
bot=info.get('bot', []),
))
return result
class NoticeConfigForm(Form):
platform = StringField(validators=[
validators.DataRequired(message="平台 不能为空"),
validators.Length(max=255),
])
info = StringField(validators=[
validators.DataRequired(message="信息 不能为空"),
validators.Length(max=255),
])
class NoticeConfigUpdateForm(Form):
info = StringField(validators=[
validators.DataRequired(message="信息 不能为空"),
validators.Length(max=255),
])
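
Usage sketch (illustrative): registering a platform with NoticeConfigCRUD. The platform key and credential fields are made up; update_messenger_config() PUTs a stringified copy of info (with bot removed) as {info["name"]: [info]} to <messenger>/v1/senders, and a non-empty bot list goes out in a second PUT keyed by BotNameMap[name].

    NoticeConfigCRUD.add_notice_config(
        platform="wechatApp",             # hypothetical platform key
        info={
            "name": "wechatApp",
            "label": "WeCom",
            "corpid": "ww0123456789",     # hypothetical credentials
            "agentid": 1000002,
            "bot": [],                    # optional bot senders
        },
    )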


@ -0,0 +1,65 @@
# -*- coding:utf-8 -*-
from api.lib.resp_format import CommonErrFormat
class ErrFormat(CommonErrFormat):
company_info_is_already_existed = "公司信息已存在!无法创建"
no_file_part = "没有文件部分"
file_is_required = "文件是必须的"
direct_supervisor_is_not_self = "直属上级不能是自己"
parent_department_is_not_self = "上级部门不能是自己"
employee_list_is_empty = "员工列表为空"
column_name_not_support = "不支持的列名"
password_is_required = "密码不能为空"
employee_acl_rid_is_zero = "员工ACL角色ID不能为0"
generate_excel_failed = "生成excel失败: {}"
rename_columns_failed = "字段转换为中文失败: {}"
cannot_block_this_employee_is_other_direct_supervisor = "该员工是其他员工的直属上级, 不能禁用"
cannot_block_this_employee_is_department_manager = "该员工是部门负责人, 不能禁用"
employee_id_not_found = "员工ID [{}] 不存在"
value_is_required = "值是必须的"
email_already_exists = "邮箱 [{}] 已存在"
query_column_none_keep_value_empty = "查询 {} 空值时请保持value为空"
not_support_operator = "不支持的操作符: {}"
not_support_relation = "不支持的关系: {}"
conditions_field_missing = "conditions内元素字段缺失请检查"
datetime_format_error = "{} 格式错误,应该为:%Y-%m-%d %H:%M:%S"
department_level_relation_error = "部门层级关系不正确"
delete_reserved_department_name = "保留部门,无法删除!"
department_id_is_required = "部门ID是必须的"
department_list_is_required = "部门列表是必须的"
cannot_to_be_parent_department = "{} 不能设置为上级部门"
department_id_not_found = "部门ID [{}] 不存在"
parent_department_id_must_more_than_zero = "上级部门ID必须大于0"
department_name_already_exists = "部门名称 [{}] 已存在"
new_department_is_none = "新部门是空的"
acl_edit_user_failed = "ACL 修改用户失败: {}"
acl_uid_not_found = "ACL 用户UID [{}] 不存在"
acl_add_user_failed = "ACL 添加用户失败: {}"
acl_add_role_failed = "ACL 添加角色失败: {}"
acl_update_role_failed = "ACL 更新角色失败: {}"
acl_get_all_users_failed = "ACL 获取所有用户失败: {}"
acl_remove_user_from_role_failed = "ACL 从角色中移除用户失败: {}"
acl_add_user_to_role_failed = "ACL 添加用户到角色失败: {}"
acl_import_user_failed = "ACL 导入用户[{}]失败: {}"
nickname_is_required = "用户名不能为空"
username_is_required = "username不能为空"
email_is_required = "邮箱不能为空"
email_format_error = "邮箱格式错误"
email_send_timeout = "邮件发送超时"
common_data_not_found = "ID {} 找不到记录"
notice_platform_existed = "{} 已存在"
notice_not_existed = "{} 配置项不存在"
notice_please_config_messenger_first = "请先配置 messenger"
notice_bind_err_with_empty_mobile = "绑定失败,手机号为空"
notice_bind_failed = "绑定失败: {}"
notice_bind_success = "绑定成功"
notice_remove_bind_success = "解绑成功"


@ -0,0 +1,16 @@
import uuid
from api.lib.common_setting.utils import get_cur_time_str
def allowed_file(filename, allowed_extensions):
return '.' in filename and filename.rsplit('.', 1)[1].lower() in allowed_extensions
def generate_new_file_name(name):
ext = name.split('.')[-1]
prev_name = ''.join(name.split(f".{ext}")[:-1])
uid = str(uuid.uuid4())
cur_str = get_cur_time_str('_')
return f"{prev_name}_{cur_str}_{uid}.{ext}"


@ -0,0 +1,25 @@
# -*- coding:utf-8 -*-
from datetime import datetime
def get_cur_time_str(split_flag='-'):
f = f"%Y{split_flag}%m{split_flag}%d{split_flag}%H{split_flag}%M{split_flag}%S{split_flag}%f"
return datetime.now().strftime(f)[:-3]
class BaseEnum(object):
_ALL_ = set()
@classmethod
def is_valid(cls, item):
return item in cls.all()
@classmethod
def all(cls):
if not cls._ALL_:
cls._ALL_ = {
getattr(cls, attr)
for attr in dir(cls)
if not attr.startswith("_") and not callable(getattr(cls, attr))
}
return cls._ALL_
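
Usage sketch (illustrative): BaseEnum collects every non-callable, non-underscore class attribute, so plain constant classes get all()/is_valid() for free. The Color class below is an example, not repository code.

    class Color(BaseEnum):
        RED = "red"
        BLUE = "blue"

    assert Color.all() == {"red", "blue"}
    assert Color.is_valid("red")
    assert not Color.is_valid("green")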

cmdb-api/api/lib/const.py

@ -0,0 +1,17 @@
# -*- coding:utf-8 -*-
from api.lib.utils import BaseEnum
class PermEnum(BaseEnum):
ADD = "create"
UPDATE = "update"
DELETE = "delete"
READ = "read"
EXECUTE = "execute"
GRANT = "grant"
ADMIN = "admin"
class RoleEnum(BaseEnum):
ADMIN = "OneOPS_Application_Admin"


@ -0,0 +1,180 @@
# -*- coding:utf-8 -*-
import datetime
import six
from api.extensions import db
from api.lib.exception import CommitException
class FormatMixin(object):
def to_dict(self):
res = dict([(k, getattr(self, k) if not isinstance(
getattr(self, k), (datetime.datetime, datetime.date, datetime.time)) else str(
getattr(self, k))) for k in getattr(self, "__mapper__").c.keys()])
# FIXME: getattr(cls, "__table__").columns k.name
res.pop('password', None)
res.pop('_password', None)
res.pop('secret', None)
return res
@classmethod
def from_dict(cls, **kwargs):
from sqlalchemy.sql.sqltypes import Time, Date, DateTime
columns = dict(getattr(cls, "__table__").columns)
for k, c in columns.items():
if kwargs.get(k):
if type(c.type) == Time:
kwargs[k] = datetime.datetime.strptime(kwargs[k], "%H:%M:%S").time()
if type(c.type) == Date:
kwargs[k] = datetime.datetime.strptime(kwargs[k], "%Y-%m-%d").date()
if type(c.type) == DateTime:
kwargs[k] = datetime.datetime.strptime(kwargs[k], "%Y-%m-%d %H:%M:%S")
return cls(**kwargs)
@classmethod
def get_columns(cls):
return {k: 1 for k in getattr(cls, "__mapper__").c.keys()}
class CRUDMixin(FormatMixin):
@classmethod
def create(cls, flush=False, commit=True, **kwargs):
return cls(**kwargs).save(flush=flush, commit=commit)
def update(self, flush=False, commit=True, filter_none=True, **kwargs):
kwargs.pop("id", None)
for attr, value in six.iteritems(kwargs):
if (value is not None and filter_none) or not filter_none:
setattr(self, attr, value)
return self.save(flush=flush, commit=commit)
def save(self, commit=True, flush=False):
db.session.add(self)
try:
if flush:
db.session.flush()
elif commit:
db.session.commit()
except Exception as e:
db.session.rollback()
raise CommitException(str(e))
return self
def delete(self, flush=False, commit=True):
db.session.delete(self)
try:
if flush:
return db.session.flush()
elif commit:
return db.session.commit()
except Exception as e:
db.session.rollback()
raise CommitException(str(e))
def soft_delete(self, flush=False, commit=True):
setattr(self, "deleted", True)
setattr(self, "deleted_at", datetime.datetime.now())
self.save(flush=flush, commit=commit)
@classmethod
def get_by_id(cls, _id):
if any((isinstance(_id, six.string_types) and _id.isdigit(),
isinstance(_id, (six.integer_types, float))), ):
obj = getattr(cls, "query").get(int(_id))
if obj and not obj.deleted:
return obj
@classmethod
def get_by(cls, first=False,
to_dict=True,
fl=None,
exclude=None,
deleted=False,
use_master=False,
only_query=False,
**kwargs):
db_session = db.session if not use_master else db.session().using_bind("master")
fl = fl.strip().split(",") if fl and isinstance(fl, six.string_types) else (fl or [])
exclude = exclude.strip().split(",") if exclude and isinstance(exclude, six.string_types) else (exclude or [])
keys = cls.get_columns()
fl = [k for k in fl if k in keys]
fl = [k for k in keys if k not in exclude and not k.isupper()] if exclude else fl
fl = list(filter(lambda x: "." not in x, fl))
if hasattr(cls, "deleted") and deleted is not None:
kwargs["deleted"] = deleted
kwargs_for_func = {i[7:]: kwargs[i] for i in kwargs if i.startswith('__func_')}
kwargs = {i: kwargs[i] for i in kwargs if not i.startswith('__func_')}
if fl:
query = db_session.query(*[getattr(cls, k) for k in fl])
else:
query = db_session.query(cls)
query = query.filter_by(**kwargs)
for i in kwargs_for_func:
func, key = i.split('__key_')
query = query.filter(getattr(getattr(cls, key), func)(kwargs_for_func[i]))
if only_query:
return query
if fl:
result = [{k: getattr(i, k) for k in fl} if to_dict else i for i in query]
else:
result = [i.to_dict() if to_dict else i for i in query]
return result[0] if first and result else (None if first else result)
@classmethod
def get_by_like(cls, to_dict=True, deleted=False, **kwargs):
query = db.session.query(cls)
if hasattr(cls, "deleted") and deleted is not None:
query = query.filter(cls.deleted.is_(deleted))
for k, v in kwargs.items():
query = query.filter(getattr(cls, k).ilike('%{0}%'.format(v)))
return [i.to_dict() if to_dict else i for i in query]
class SoftDeleteMixin(object):
deleted_at = db.Column(db.DateTime)
deleted = db.Column(db.Boolean, index=True, default=False)
class TimestampMixin(object):
created_at = db.Column(db.DateTime, default=lambda: datetime.datetime.now())
updated_at = db.Column(db.DateTime, onupdate=lambda: datetime.datetime.now())
class TimestampMixin2(object):
created_at = db.Column(db.DateTime, default=lambda: datetime.datetime.now(), index=True)
class SurrogatePK(object):
__table_args__ = {"extend_existing": True}
id = db.Column(db.Integer, primary_key=True, autoincrement=True)
class Model(SoftDeleteMixin, TimestampMixin, CRUDMixin, db.Model, SurrogatePK):
__abstract__ = True
class CRUDModel(db.Model, CRUDMixin):
__abstract__ = True
class Model2(TimestampMixin2, db.Model, CRUDMixin, SurrogatePK):
__abstract__ = True
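
Usage sketch (illustrative): how these mixins are typically consumed. The Server model and its columns are assumptions made for the example, not part of this diff.

    class Server(Model):   # soft delete + created_at/updated_at + CRUD helpers + int PK
        __tablename__ = "t_example_server"
        name = db.Column(db.String(64), index=True)
        cpu = db.Column(db.Integer)

    s = Server.create(name="web-01", cpu=8)      # INSERT + commit
    s.update(cpu=16)                             # UPDATE; None values skipped by default

    # Plain equality filters return dicts; "__func_<op>__key_<column>" filters
    # are translated to column operators, e.g. Server.id.in_([1, 2, 3]):
    rows = Server.get_by(name="web-01")
    objs = Server.get_by(to_dict=False, **{"__func_in___key_id": [1, 2, 3]})

    s.soft_delete()                              # sets deleted/deleted_at, keeps the row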


@ -0,0 +1,72 @@
# -*- coding:utf-8 -*-
from functools import wraps
from flask import abort
from flask import request
from api.lib.resp_format import CommonErrFormat
def kwargs_required(*required_args):
def decorate(func):
@wraps(func)
def wrapper(*args, **kwargs):
for arg in required_args:
if arg not in kwargs:
return abort(400, CommonErrFormat.argument_required.format(arg))
return func(*args, **kwargs)
return wrapper
return decorate
def args_required(*required_args, **value_required):
def decorate(func):
@wraps(func)
def wrapper(*args, **kwargs):
for arg in required_args:
if arg not in request.values:
return abort(400, CommonErrFormat.argument_required.format(arg))
if value_required.get('value_required', True) and not request.values.get(arg):
return abort(400, CommonErrFormat.argument_value_required.format(arg))
return func(*args, **kwargs)
return wrapper
return decorate
def args_validate(model_cls, exclude_args=None):
def decorate(func):
@wraps(func)
def wrapper(*args, **kwargs):
for arg in request.values:
if hasattr(model_cls, arg):
attr = getattr(model_cls, arg)
if not hasattr(attr, "type"):
continue
if exclude_args and arg in exclude_args:
continue
if attr.type.python_type == str and attr.type.length and (
len(request.values[arg] or '') > attr.type.length):
return abort(400, CommonErrFormat.argument_str_length_limit.format(arg, attr.type.length))
elif attr.type.python_type in (int, float) and request.values[arg]:
try:
int(float(request.values[arg]))
except (TypeError, ValueError):
return abort(400, CommonErrFormat.argument_invalid.format(arg))
return func(*args, **kwargs)
return wrapper
return decorate
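
Usage sketch (illustrative): these decorators are meant to wrap Flask views. The blueprint, route and Server model below are assumptions for the example.

    from flask import Blueprint, jsonify

    demo = Blueprint("demo", __name__)

    @demo.route("/servers", methods=["POST"])
    @args_required("name")          # 400 if "name" is missing or empty
    @args_validate(Server)          # 400 if values exceed column lengths or have the wrong type
    def create_server():
        return jsonify(code=200)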


@ -0,0 +1,11 @@
# -*- coding:utf-8 -*-
from werkzeug.exceptions import NotFound, Forbidden, BadRequest
class CommitException(Exception):
pass
AbortException = (NotFound, Forbidden, BadRequest)


@ -0,0 +1,49 @@
# -*- coding:utf-8 -*-
import hashlib
import requests
from flask import abort
from flask import current_app
from flask_login import current_user
from future.moves.urllib.parse import urlparse
def build_api_key(path, params):
current_user is not None or abort(403, u"您得登陆才能进行该操作")
key = current_user.key
secret = current_user.secret
values = "".join([str(params[k]) for k in sorted(params.keys())
if params[k] is not None]) if params.keys() else ""
_secret = "".join([path, secret, values]).encode("utf-8")
params["_secret"] = hashlib.sha1(_secret).hexdigest()
params["_key"] = key
return params
def api_request(url, method="get", params=None, ret_key=None):
params = params or {}
resp = None
try:
method = method.lower()
params = build_api_key(urlparse(url).path, params)
if method == "get":
resp = getattr(requests, method)(url, params=params)
else:
resp = getattr(requests, method)(url, data=params)
if resp.status_code != 200:
return abort(resp.status_code, resp.json().get("message"))
resp = resp.json()
if ret_key is not None:
return resp.get(ret_key)
return resp
except Exception as e:
code = e.code if hasattr(e, "code") else None
if isinstance(code, int) and resp is not None:
return abort(code, resp.json().get("message"))
current_app.logger.warning(url)
current_app.logger.warning(params)
current_app.logger.error(str(e))
return abort(500, "server unknown error")
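
Usage sketch (illustrative): build_api_key() signs each call with sha1(path + user secret + concatenated sorted param values) and injects _key/_secret, so api_request() only needs the target URL. The URL, params and ret_key below are examples and assume a logged-in current_user.

    ci_list = api_request(
        "http://127.0.0.1:5000/api/v0.1/ci/s",
        method="get",
        params={"q": "_type:server", "page": 1},
        ret_key="result",
    )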

cmdb-api/api/lib/mail.py

@ -0,0 +1,54 @@
# -*- coding:utf-8 -*-
import smtplib
import time
from email.header import Header
from email.mime.image import MIMEImage
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.utils import make_msgid
from flask import current_app
def send_mail(sender, receiver, subject, content, ctype="html", pics=()):
"""subject and body are unicode objects"""
if not receiver:
return
if not sender:
sender = current_app.config.get("DEFAULT_MAIL_SENDER")
smtpserver = current_app.config.get("MAIL_SERVER")
if ctype == "html":
msg = MIMEText(content, 'html', 'utf-8')
else:
msg = MIMEText(content, 'plain', 'utf-8')
if len(pics) != 0:
msgRoot = MIMEMultipart('related')
msgText = MIMEText(content, 'html', 'utf-8')
msgRoot.attach(msgText)
i = 1
for pic in pics:
fp = open(pic, "rb")
image = MIMEImage(fp.read())
fp.close()
image.add_header('Content-ID', '<img%02d>' % i)
msgRoot.attach(image)
i += 1
msg = msgRoot
msg['Subject'] = Header(subject, 'utf-8')
msg['From'] = sender
msg['To'] = ';'.join(receiver)
msg['Message-ID'] = make_msgid()
msg['date'] = time.strftime('%a, %d %b %Y %H:%M:%S %z')
if current_app.config.get("MAIL_USE_SSL") or current_app.config.get("MAIL_USE_TLS"):
smtp = smtplib.SMTP_SSL(smtpserver)
else:
smtp = smtplib.SMTP()
smtp.connect(smtpserver, 25)
if current_app.config.get("MAIL_PASSWORD") != "":
smtp.login(current_app.config.get("MAIL_USERNAME"), current_app.config.get("MAIL_PASSWORD"))
smtp.sendmail(sender, receiver, msg.as_string())
smtp.quit()

cmdb-api/api/lib/mixin.py

@ -0,0 +1,96 @@
# -*- coding:utf-8 -*-
from flask import abort
from sqlalchemy import func
from api.extensions import db
from api.lib.utils import get_page
from api.lib.utils import get_page_size
class DBMixin(object):
cls = None
@classmethod
def search(cls, page, page_size, fl=None, only_query=False, reverse=False, count_query=False, **kwargs):
page = get_page(page)
page_size = get_page_size(page_size)
if fl is None:
query = db.session.query(cls.cls).filter(cls.cls.deleted.is_(False))
else:
query = db.session.query(*[getattr(cls.cls, i) for i in fl]).filter(cls.cls.deleted.is_(False))
_query = None
if count_query:
_query = db.session.query(func.count(cls.cls.id)).filter(cls.cls.deleted.is_(False))
for k in kwargs:
if hasattr(cls.cls, k):
query = query.filter(getattr(cls.cls, k) == kwargs[k])
if count_query:
_query = _query.filter(getattr(cls.cls, k) == kwargs[k])
if reverse:
query = query.order_by(cls.cls.id.desc())
if only_query and not count_query:
return query
elif only_query:
return _query, query
numfound = query.count()
return numfound, [i.to_dict() if fl is None else getattr(i, '_asdict')()
for i in query.offset((page - 1) * page_size).limit(page_size)]
def _must_be_required(self, _id):
existed = self.cls.get_by_id(_id)
existed or abort(404, "Factor [{}] does not exist".format(_id))
return existed
def _can_add(self, **kwargs):
raise NotImplementedError
def _after_add(self, obj, **kwargs):
pass
def _can_update(self, **kwargs):
raise NotImplementedError
def _after_update(self, obj, **kwargs):
pass
def _can_delete(self, **kwargs):
raise NotImplementedError
def _after_delete(self, obj):
pass
def add(self, **kwargs):
kwargs = self._can_add(**kwargs) or kwargs
obj = self.cls.create(**kwargs)
kwargs['_id'] = obj.id if hasattr(obj, 'id') else None
self._after_add(obj, **kwargs)
return obj
def update(self, _id, **kwargs):
inst = self._can_update(_id=_id, **kwargs)
obj = inst.update(_id=_id, **kwargs)
self._after_update(obj, **kwargs)
return obj
def delete(self, _id):
inst = self._can_delete(_id=_id)
inst.soft_delete()
self._after_delete(inst)
return inst
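
Usage sketch (illustrative): DBMixin subclasses point `cls` at a model and implement the _can_add/_can_update/_can_delete hooks. ServerManager and the Server model are assumptions carried over from the earlier sketch.

    from flask import abort

    class ServerManager(DBMixin):
        cls = Server

        def _can_add(self, **kwargs):
            Server.get_by(name=kwargs["name"], first=True) and abort(
                400, "server {} already exists".format(kwargs["name"]))
            return kwargs

        def _can_update(self, **kwargs):
            return self._must_be_required(kwargs["_id"])

        def _can_delete(self, **kwargs):
            return self._must_be_required(kwargs["_id"])

    numfound, servers = ServerManager.search(page=1, page_size=20, name="web-01")
    obj = ServerManager().add(name="web-02", cpu=4)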


@ -0,0 +1,72 @@
# -*- coding:utf-8 -*-
import json
import requests
import six
from flask import current_app
from jinja2 import Template
from markdownify import markdownify as md
from api.lib.common_setting.notice_config import NoticeConfigCRUD
from api.lib.mail import send_mail
def _request_messenger(subject, body, tos, sender, payload):
params = dict(sender=sender, title=subject,
tos=[to[sender] for to in tos if to.get(sender)])
if not params['tos']:
raise Exception("no receivers")
flat_tos = []
for i in params['tos']:
if i.strip():
to = Template(i).render(payload)
if isinstance(to, list):
flat_tos.extend(to)
elif isinstance(to, six.string_types):
flat_tos.append(to)
params['tos'] = flat_tos
if sender == "email":
params['msgtype'] = 'text/html'
params['content'] = body
else:
params['msgtype'] = 'markdown'
try:
content = md("{}\n{}".format(subject or '', body or ''))
except Exception as e:
current_app.logger.warning("html2markdown failed: {}".format(e))
content = "{}\n{}".format(subject or '', body or '')
params['content'] = json.dumps(dict(content=content))
url = current_app.config.get('MESSENGER_URL') or NoticeConfigCRUD.get_messenger_url()
if not url:
raise Exception("no messenger url")
if not url.endswith("message"):
url = "{}/v1/message".format(url)
resp = requests.post(url, json=params)
if resp.status_code != 200:
raise Exception(resp.text)
return resp.text
def notify_send(subject, body, methods, tos, payload=None):
payload = payload or {}
payload = {k: v or '' for k, v in payload.items()}
subject = Template(subject).render(payload)
body = Template(body).render(payload)
res = ''
for method in methods:
if method == "email" and not current_app.config.get('USE_MESSENGER', True):
send_mail(None, [Template(to.get('email')).render(payload) for to in tos], subject, body)
res += (_request_messenger(subject, body, tos, method, payload) + "\n")
return res
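
Usage sketch (illustrative): the shapes notify_send() expects. Receiver dicts are keyed by method name, and values may be Jinja2 templates rendered against `payload`; "wechatApp" is a hypothetical sender name.

    tos = [
        {"email": "ops@example.com", "wechatApp": "wx-uid-001"},
        {"email": "{{ owner_email }}"},        # filled in from payload
    ]
    payload = {"owner_email": "dba@example.com", "ci_name": "web-01"}

    notify_send(subject="CI {{ ci_name }} changed",
                body="<p>{{ ci_name }} was updated</p>",
                methods=["email", "wechatApp"],
                tos=tos,
                payload=payload)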


@ -0,0 +1 @@
# -*- coding:utf-8 -*-


@ -0,0 +1,25 @@
# -*- coding:utf-8 -*-
from functools import wraps
from flask import abort
from flask import request
from api.lib.perm.acl.cache import AppCache, AppAccessTokenCache
from api.lib.perm.acl.resp_format import ErrFormat
def validate_app(func):
@wraps(func)
def wrapper(*args, **kwargs):
if not request.headers.get('App-Access-Token', '').strip():
app_id = request.values.get('app_id')
app = AppCache.get(app_id)
if app is None:
return abort(400, ErrFormat.app_not_found.format("id={}".format(app_id)))
request.values['app_id'] = app.id
return func(*args, **kwargs)
return wrapper


@ -0,0 +1,328 @@
# -*- coding:utf-8 -*-
import functools
import hashlib
import requests
import six
from flask import abort
from flask import current_app
from flask import request
from flask import session
from flask_login import current_user
from api.extensions import cache
from api.lib.perm.acl.audit import AuditCRUD
from api.lib.perm.acl.cache import AppCache
from api.lib.perm.acl.cache import RoleCache
from api.lib.perm.acl.cache import UserCache
from api.lib.perm.acl.permission import PermissionCRUD
from api.lib.perm.acl.resource import ResourceCRUD
from api.lib.perm.acl.resp_format import ErrFormat
from api.lib.perm.acl.role import RoleCRUD
from api.lib.perm.acl.role import RoleRelationCRUD
from api.models.acl import App
from api.models.acl import Resource
from api.models.acl import ResourceGroup
from api.models.acl import ResourceType
from api.models.acl import Role
def get_access_token():
url = "{0}/acl/apps/token".format(current_app.config.get('ACL_URI'))
payload = dict(app_id=current_app.config.get('APP_ID'),
secret_key=hashlib.md5(current_app.config.get('APP_SECRET_KEY').encode('utf-8')).hexdigest())
try:
res = requests.post(url, data=payload).json()
return res.get("token")
except Exception as e:
current_app.logger.error(str(e))
class AccessTokenCache(object):
TOKEN_KEY = 'TICKET::AccessToken'
@classmethod
def get(cls):
if cache.get(cls.TOKEN_KEY) is not None and cache.get(cls.TOKEN_KEY) != "":
return cache.get(cls.TOKEN_KEY)
res = get_access_token() or ""
cache.set(cls.TOKEN_KEY, res, timeout=60 * 60)
return res
@classmethod
def clean(cls):
cache.clear(cls.TOKEN_KEY)
class ACLManager(object):
def __init__(self, app=None):
self.app = AppCache.get(app or 'cmdb')
if not self.app:
raise Exception(ErrFormat.app_not_found.format(app))
self.app_id = self.app.id
def _get_resource(self, name, resource_type_name):
resource_type = ResourceType.get_by(name=resource_type_name, first=True, to_dict=False)
resource_type or abort(404, ErrFormat.resource_type_not_found.format(resource_type_name))
return Resource.get_by(resource_type_id=resource_type.id,
app_id=self.app_id,
name=name,
first=True,
to_dict=False)
def _get_resource_group(self, name):
return ResourceGroup.get_by(
app_id=self.app_id,
name=name,
first=True,
to_dict=False
)
def _get_role(self, name):
user = UserCache.get(name)
if user:
return Role.get_by(name=name, uid=user.uid, first=True, to_dict=False)
return (Role.get_by(name=name, app_id=self.app_id, first=True, to_dict=False) or
Role.get_by(name=name, first=True, to_dict=False))
def add_resource(self, name, resource_type_name=None):
resource_type = ResourceType.get_by(name=resource_type_name, first=True, to_dict=False)
resource_type or abort(404, ErrFormat.resource_type_not_found.format(resource_type_name))
uid = AuditCRUD.get_current_operate_uid()
ResourceCRUD.add(name, resource_type.id, self.app_id, uid)
def update_resource(self, name, new_name, resource_type_name=None):
resource = self._get_resource(name, resource_type_name)
if resource is None:
self.add_resource(new_name, resource_type_name)
else:
ResourceCRUD.update(resource.id, new_name)
def grant_resource_to_role(self, name, role, resource_type_name=None, permissions=None):
resource = self._get_resource(name, resource_type_name)
role = self._get_role(role)
if resource:
PermissionCRUD.grant(role.id, permissions, resource_id=resource.id)
else:
group = self._get_resource_group(name)
if group:
PermissionCRUD.grant(role.id, permissions, group_id=group.id)
def grant_resource_to_role_by_rid(self, name, rid, resource_type_name=None, permissions=None):
resource = self._get_resource(name, resource_type_name)
if resource:
PermissionCRUD.grant(rid, permissions, resource_id=resource.id)
else:
group = self._get_resource_group(name)
if group:
PermissionCRUD.grant(rid, permissions, group_id=group.id)
def revoke_resource_from_role(self, name, role, resource_type_name=None, permissions=None):
resource = self._get_resource(name, resource_type_name)
role = self._get_role(role)
if resource:
PermissionCRUD.revoke(role.id, permissions, resource_id=resource.id)
else:
group = self._get_resource_group(name)
if group:
PermissionCRUD.revoke(role.id, permissions, group_id=group.id)
def revoke_resource_from_role_by_rid(self, name, rid, resource_type_name=None, permissions=None):
resource = self._get_resource(name, resource_type_name)
if resource:
PermissionCRUD.revoke(rid, permissions, resource_id=resource.id)
else:
group = self._get_resource_group(name)
if group:
PermissionCRUD.revoke(rid, permissions, group_id=group.id)
def del_resource(self, name, resource_type_name=None):
resource = self._get_resource(name, resource_type_name)
if resource:
ResourceCRUD.delete(resource.id)
def has_permission(self, resource_name, resource_type, perm, resource_id=None):
if is_app_admin(self.app_id):
return True
role = self._get_role(current_user.username)
role or abort(404, ErrFormat.role_not_found.format(current_user.username))
return RoleCRUD.has_permission(role.id, resource_name, resource_type, self.app_id, perm,
resource_id=resource_id)
@staticmethod
def get_user_info(username, app_id=None):
user = UserCache.get(username)
if not user:
user = RoleCache.get_by_name(app_id, username) or RoleCache.get_by_name(None, username) # FIXME
if not user:
return abort(404, ErrFormat.user_not_found.format(username))
user = user.to_dict()
role = Role.get_by(uid=user['uid'], first=True, to_dict=False) if user.get('uid') else None
if role is not None:
user["rid"] = role.id
if app_id is None:
parent_ids = []
apps = App.get_by(to_dict=False)
for app in apps:
parent_ids.extend(RoleRelationCRUD.recursive_parent_ids(role.id, app.id))
else:
parent_ids = RoleRelationCRUD.recursive_parent_ids(role.id, app_id)
user['parents'] = [RoleCache.get(rid).name for rid in set(parent_ids) if RoleCache.get(rid)]
else:
user['parents'] = []
user['rid'] = user['id'] if user.get('id') else None
if user['rid']:
parent_ids = RoleRelationCRUD.recursive_parent_ids(user['rid'], app_id)
user['parents'] = [RoleCache.get(rid).name for rid in set(parent_ids) if RoleCache.get(rid)]
return user
def get_resources(self, resource_type_name=None):
role = self._get_role(current_user.username)
role or abort(404, ErrFormat.role_not_found.format(current_user.username))
rid = role.id
return RoleCRUD.recursive_resources(rid, self.app_id, resource_type_name).get('resources')
@staticmethod
def authenticate_with_token(token):
url = "{0}/acl/auth_with_token".format(current_app.config.get('ACL_URI'))
try:
return requests.post(url, json={"token": token},
headers={'App-Access-Token': AccessTokenCache.get()}).json()
except:
return {}
def validate_permission(resources, resource_type, perm, app=None):
if not resources:
return
if current_app.config.get("USE_ACL"):
if current_user.username == "worker":
return
resources = [resources] if isinstance(resources, six.string_types) else resources
for resource in resources:
if not ACLManager(app).has_permission(resource, resource_type, perm):
return abort(403, ErrFormat.resource_no_permission.format(resource, perm))
def has_perm(resources, resource_type, perm, app=None):
def decorator_has_perm(func):
@functools.wraps(func)
def wrapper_has_perm(*args, **kwargs):
if not resources:
return
if current_app.config.get("USE_ACL"):
if is_app_admin(app):
return func(*args, **kwargs)
validate_permission(resources, resource_type, perm, app)
return func(*args, **kwargs)
return wrapper_has_perm
return decorator_has_perm
def is_app_admin(app=None):
app = app or 'cmdb'
app = AppCache.get(app)
if app is None:
return False
app_id = app.id
if 'acl_admin' in session.get("acl", {}).get("parentRoles", []):
return True
for role_name in session.get("acl", {}).get("parentRoles", []):
role = RoleCache.get_by_name(app_id, role_name)
if role and role.is_app_admin:
return True
return False
def is_admin():
if 'acl_admin' in session.get("acl", {}).get("parentRoles", []):
return True
return False
def admin_required(app=None):
def decorator_admin_required(func):
@functools.wraps(func)
def wrapper_admin_required(*args, **kwargs):
if is_app_admin(app):
return func(*args, **kwargs)
return abort(403, ErrFormat.admin_required)
return wrapper_admin_required
return decorator_admin_required
def has_perm_from_args(arg_name, resource_type, perm, callback=None, app=None):
def decorator_has_perm(func):
@functools.wraps(func)
def wrapper_has_perm(*args, **kwargs):
if not arg_name:
return
resource = request.view_args.get(arg_name) or request.values.get(arg_name)
if callback is not None and resource:
resource = callback(resource)
if current_app.config.get("USE_ACL") and resource:
if is_app_admin(app):
return func(*args, **kwargs)
validate_permission(resource, resource_type, perm, app)
return func(*args, **kwargs)
return wrapper_has_perm
return decorator_has_perm
def role_required(role_name, app=None):
def decorator_role_required(func):
@functools.wraps(func)
def wrapper_role_required(*args, **kwargs):
if not role_name:
return
if current_app.config.get("USE_ACL"):
if getattr(current_user, 'username', None) == "worker":
return func(*args, **kwargs)
if role_name not in session.get("acl", {}).get("parentRoles", []) and not is_app_admin(app):
return abort(403, ErrFormat.role_required.format(role_name))
return func(*args, **kwargs)
return wrapper_role_required
return decorator_role_required
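
Usage sketch (illustrative): the permission helpers above in a Flask view. The resource type "CIType", the resource name and the route are assumptions for the example; RoleEnum/PermEnum are the constants from cmdb-api/api/lib/const.py shown earlier in this diff.

    from flask import Blueprint

    acl_demo = Blueprint("acl_demo", __name__)

    @acl_demo.route("/ci-types", methods=["POST"])
    @role_required(RoleEnum.ADMIN)
    @has_perm(["physical_server"], "CIType", PermEnum.ADD)
    def create_ci_type():
        return {"code": 200}

    # the same check, done programmatically:
    # ACLManager("cmdb").has_permission("physical_server", "CIType", PermEnum.ADD)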


@ -0,0 +1,93 @@
# -*- coding:utf-8 -*-
import datetime
import hashlib
import jwt
from flask import abort
from flask import current_app
from api.extensions import db
from api.lib.perm.acl.audit import AuditCRUD
from api.lib.perm.acl.audit import AuditOperateType
from api.lib.perm.acl.audit import AuditScope
from api.lib.perm.acl.resp_format import ErrFormat
from api.models.acl import App
class AppCRUD(object):
cls = App
@staticmethod
def get_all():
return App.get_by(to_dict=False)
@staticmethod
def get(app_id):
return App.get_by_id(app_id)
@staticmethod
def search(q, page=1, page_size=None):
query = db.session.query(App).filter(App.deleted.is_(False))
if q:
query = query.filter(App.name.ilike('%{0}%'.format(q)))
numfound = query.count()
res = query.offset((page - 1) * page_size).limit(page_size)
return numfound, res
@classmethod
def add(cls, name, description):
App.get_by(name=name) and abort(400, ErrFormat.app_is_ready_existed.format(name))
from api.lib.perm.acl.user import UserCRUD
app_id, secret_key = UserCRUD.gen_key_secret()
app = App.create(name=name, description=description, app_id=app_id, secret_key=secret_key)
AuditCRUD.add_resource_log(app.id, AuditOperateType.create, AuditScope.app, app.id, {}, app.to_dict(), {})
return app
@classmethod
def update(cls, _id, **kwargs):
kwargs.pop('id', None)
existed = App.get_by_id(_id) or abort(404, ErrFormat.app_not_found.format("id={}".format(_id)))
origin = existed.to_dict()
existed = existed.update(**kwargs)
AuditCRUD.add_resource_log(existed.id, AuditOperateType.update,
AuditScope.app, existed.id, origin, existed.to_dict(), {})
return existed
@classmethod
def delete(cls, _id):
app = App.get_by_id(_id) or abort(404, ErrFormat.app_not_found.format("id={}".format(_id)))
origin = app.to_dict()
app.soft_delete()
AuditCRUD.add_resource_log(app.id, AuditOperateType.delete,
AuditScope.app, app.id, origin, {}, {})
@staticmethod
def _get_by_key(key):
return App.get_by(app_id=key, first=True, to_dict=False)
@classmethod
def gen_token(cls, key, secret):
app = cls._get_by_key(key) or abort(404, ErrFormat.app_not_found.format("key={}".format(key)))
secret != hashlib.md5(app.secret_key.encode('utf-8')).hexdigest() and abort(403, ErrFormat.app_secret_invalid)
token = jwt.encode({
'sub': app.name,
'iat': datetime.datetime.now(),
'exp': datetime.datetime.now() + datetime.timedelta(minutes=2 * 60)},
current_app.config['SECRET_KEY'])
try:
return token.decode()
except AttributeError:
return token


@ -0,0 +1,350 @@
# -*- coding:utf-8 -*-
import itertools
import json
from enum import Enum
from typing import List
from flask import has_request_context, request
from flask_login import current_user
from sqlalchemy import func
from api.lib.perm.acl import AppCache
from api.models.acl import AuditPermissionLog
from api.models.acl import AuditResourceLog
from api.models.acl import AuditRoleLog
from api.models.acl import AuditTriggerLog
from api.models.acl import Permission
from api.models.acl import Resource
from api.models.acl import ResourceGroup
from api.models.acl import ResourceType
from api.models.acl import Role
from api.models.acl import RolePermission
class AuditScope(str, Enum):
app = 'app'
resource = 'resource'
resource_type = 'resource_type'
resource_group = 'resource_group'
user = 'user'
role = 'role'
role_relation = 'role_relation'
class AuditOperateType(str, Enum):
read = 'read'
create = 'create'
update = 'update'
delete = 'delete'
user_login = 'user_login'
role_relation_add = 'role_relation_add'
role_relation_delete = 'role_relation_delete'
grant = 'grant'
revoke = 'revoke'
trigger_apply = 'trigger_apply'
trigger_cancel = 'trigger_cancel'
class AuditOperateSource(str, Enum):
api = 'api'
acl = 'acl'
trigger = 'trigger'
class AuditCRUD(object):
@staticmethod
def get_current_operate_uid(uid=None):
user_id = uid or (getattr(current_user, 'uid', None)) or getattr(current_user, 'user_id', None)
if has_request_context() and request.headers.get('X-User-Id'):
_user_id = request.headers['X-User-Id']
user_id = int(_user_id) if _user_id.isdigit() else uid
return user_id
@staticmethod
def get_operate_source(source):
if has_request_context() and request.headers.get('App-Access-Token'):
source = AuditOperateSource.api
return source
@staticmethod
def search_permission(app_id, q=None, page=1, page_size=10, start=None, end=None):
criterion = []
if app_id:
app = AppCache.get(app_id)
criterion.append(AuditPermissionLog.app_id == app.id)
if start:
criterion.append(AuditPermissionLog.created_at >= start)
if end:
criterion.append(AuditPermissionLog.created_at <= end)
kwargs = {expr.split(':')[0]: expr.split(':')[1] for expr in q.split(',')} if q else {}
for k, v in kwargs.items():
if k == 'resource_type_id':
criterion.append(AuditPermissionLog.resource_type_id == int(v))
elif k == 'rid':
criterion.append(AuditPermissionLog.rid == int(v))
elif k == 'resource_id':
criterion.append(func.json_contains(AuditPermissionLog.resource_ids, v) == 1)
elif k == 'operate_uid':
criterion.append(AuditPermissionLog.operate_uid == v)
elif k == 'operate_type':
criterion.append(AuditPermissionLog.operate_type == v)
records = AuditPermissionLog.query.filter(
AuditPermissionLog.deleted == 0, *criterion).order_by(
AuditPermissionLog.id.desc()).offset((page - 1) * page_size).limit(page_size).all()
data = {
'data': [r.to_dict() for r in records],
'id2resources': {},
'id2roles': {},
'id2groups': {},
'id2perms': {},
'id2resource_types': {},
}
resource_ids = set(itertools.chain(*[r.resource_ids for r in records]))
group_ids = set(itertools.chain(*[r.group_ids for r in records]))
permission_ids = set(itertools.chain(*[r.permission_ids for r in records]))
resource_type_ids = {r.resource_type_id for r in records}
rids = {r.rid for r in records}
if rids:
roles = Role.query.filter(Role.id.in_(rids)).all()
data['id2roles'] = {r.id: r.to_dict() for r in roles}
if resource_type_ids:
resource_types = ResourceType.query.filter(ResourceType.id.in_(resource_type_ids)).all()
data['id2resource_types'] = {r.id: r.to_dict() for r in resource_types}
if resource_ids:
resources = Resource.query.filter(Resource.id.in_(resource_ids)).all()
data['id2resources'] = {r.id: r.to_dict() for r in resources}
if group_ids:
groups = ResourceGroup.query.filter(ResourceGroup.id.in_(group_ids)).all()
data['id2groups'] = {_g.id: _g.to_dict() for _g in groups}
if permission_ids:
perms = Permission.query.filter(Permission.id.in_(permission_ids)).all()
data['id2perms'] = {_p.id: _p.to_dict() for _p in perms}
return data
@staticmethod
def search_role(app_id, q=None, page=1, page_size=10, start=None, end=None):
criterion = []
if app_id:
app = AppCache.get(app_id)
criterion.append(AuditRoleLog.app_id == app.id)
if start:
criterion.append(AuditRoleLog.created_at >= start)
if end:
criterion.append(AuditRoleLog.created_at <= end)
kwargs = {expr.split(':')[0]: expr.split(':')[1] for expr in q.split(',')} if q else {}
for k, v in kwargs.items():
if k == 'scope':
criterion.append(AuditRoleLog.scope == v)
elif k == 'link_id':
criterion.append(AuditRoleLog.link_id == int(v))
elif k == 'operate_uid':
criterion.append(AuditRoleLog.operate_uid == v)
elif k == 'operate_type':
criterion.append(AuditRoleLog.operate_type == v)
records = AuditRoleLog.query.filter(AuditRoleLog.deleted == 0, *criterion).order_by(
AuditRoleLog.id.desc()).offset((page - 1) * page_size).limit(page_size).all()
data = {
'data': [r.to_dict() for r in records],
'id2roles': {}
}
role_permissions = list(itertools.chain(*[r.extra.get('role_permissions', []) for r in records]))
_rids = set()
if role_permissions:
resource_ids = set([r['resource_id'] for r in role_permissions])
group_ids = set([r['group_id'] for r in role_permissions])
perm_ids = set([r['perm_id'] for r in role_permissions])
_rids.update(set([r['rid'] for r in role_permissions]))
if resource_ids:
resources = Resource.query.filter(Resource.id.in_(resource_ids)).all()
data['id2resources'] = {r.id: r.to_dict() for r in resources}
if group_ids:
groups = ResourceGroup.query.filter(ResourceGroup.id.in_(group_ids)).all()
data['id2groups'] = {_g.id: _g.to_dict() for _g in groups}
if perm_ids:
perms = Permission.query.filter(Permission.id.in_(perm_ids)).all()
data['id2perms'] = {_p.id: _p.to_dict() for _p in perms}
rids = set(itertools.chain(*[r.extra.get('child_ids', []) + r.extra.get('parent_ids', [])
for r in records]))
rids.update(_rids)
if rids:
roles = Role.query.filter(Role.id.in_(rids)).all()
data['id2roles'].update({r.id: r.to_dict() for r in roles})
return data
@staticmethod
def search_resource(app_id, q=None, page=1, page_size=10, start=None, end=None):
criterion = []
if app_id:
app = AppCache.get(app_id)
criterion.append(AuditResourceLog.app_id == app.id)
if start:
criterion.append(AuditResourceLog.created_at >= start)
if end:
criterion.append(AuditResourceLog.created_at <= end)
kwargs = {expr.split(':')[0]: expr.split(':')[1] for expr in q.split(',')} if q else {}
for k, v in kwargs.items():
if k == 'scope':
criterion.append(AuditResourceLog.scope == v)
elif k == 'link_id':
criterion.append(AuditResourceLog.link_id == int(v))
elif k == 'operate_uid':
criterion.append(AuditResourceLog.operate_uid == v)
elif k == 'operate_type':
criterion.append(AuditResourceLog.operate_type == v)
records = AuditResourceLog.query.filter(
AuditResourceLog.deleted == 0, *criterion).order_by(
AuditResourceLog.id.desc()).offset((page - 1) * page_size).limit(page_size).all()
data = {
'data': [r.to_dict() for r in records],
}
return data
@staticmethod
def search_trigger(app_id, q=None, page=1, page_size=10, start=None, end=None):
criterion = []
if app_id:
app = AppCache.get(app_id)
criterion.append(AuditTriggerLog.app_id == app.id)
if start:
criterion.append(AuditTriggerLog.created_at >= start)
if end:
criterion.append(AuditTriggerLog.created_at <= end)
kwargs = {expr.split(':')[0]: expr.split(':')[1] for expr in q.split(',')} if q else {}
for k, v in kwargs.items():
if k == 'trigger_id':
criterion.append(AuditTriggerLog.trigger_id == int(v))
elif k == 'operate_uid':
criterion.append(AuditTriggerLog.operate_uid == v)
elif k == 'operate_type':
criterion.append(AuditTriggerLog.operate_type == v)
records = AuditTriggerLog.query.filter(
AuditTriggerLog.deleted == 0, *criterion).order_by(
AuditTriggerLog.id.desc()).offset((page - 1) * page_size).limit(page_size).all()
data = {
'data': [r.to_dict() for r in records],
'id2roles': {},
'id2resource_types': {},
}
rids = set(itertools.chain(*[json.loads(r.origin.get('roles', "[]")) +
json.loads(r.current.get('roles', "[]"))
for r in records]))
resource_type_ids = set([r.origin.get('resource_type_id') for r in records
if r.origin.get('resource_type_id')] +
[r.current.get('resource_type_id') for r in records
if r.current.get('resource_type_id')])
if rids:
roles = Role.query.filter(Role.id.in_(rids)).all()
data['id2roles'] = {r.id: r.to_dict() for r in roles}
if resource_type_ids:
resource_types = ResourceType.query.filter(ResourceType.id.in_(resource_type_ids)).all()
data['id2resource_types'] = {r.id: r.to_dict() for r in resource_types}
return data
@classmethod
def add_role_log(cls, app_id, operate_type: AuditOperateType,
scope: AuditScope, link_id: int, origin: dict, current: dict, extra: dict,
uid=None, source=AuditOperateSource.acl):
user_id = cls.get_current_operate_uid(uid)
AuditRoleLog.create(app_id=app_id, operate_uid=user_id, operate_type=operate_type.value,
scope=scope.value,
link_id=link_id,
origin=origin,
current=current,
extra=extra,
source=source.value)
@classmethod
def add_resource_log(cls, app_id, operate_type: AuditOperateType,
scope: AuditScope, link_id: int, origin: dict, current: dict, extra: dict,
uid=None, source=AuditOperateSource.acl):
user_id = cls.get_current_operate_uid(uid)
source = cls.get_operate_source(source)
AuditResourceLog.create(app_id=app_id, operate_uid=user_id, operate_type=operate_type.value,
scope=scope.value,
link_id=link_id,
origin=origin,
current=current,
extra=extra,
source=source.value)
@classmethod
def add_permission_log(cls, app_id, operate_type: AuditOperateType,
rid: int, rt_id: int, role_permissions: List[RolePermission],
uid=None, source=AuditOperateSource.acl):
if not role_permissions:
return
user_id = cls.get_current_operate_uid(uid)
source = cls.get_operate_source(source)
resource_ids = list({r.resource_id for r in role_permissions if r.resource_id})
permission_ids = list({r.perm_id for r in role_permissions if r.perm_id})
group_ids = list({r.group_id for r in role_permissions if r.group_id})
AuditPermissionLog.create(app_id=app_id, operate_uid=user_id,
operate_type=operate_type.value,
rid=rid,
resource_type_id=rt_id,
resource_ids=resource_ids,
permission_ids=permission_ids,
group_ids=group_ids,
source=source.value)
@classmethod
def add_trigger_log(cls, app_id, trigger_id, operate_type: AuditOperateType,
origin: dict, current: dict, extra: dict,
uid=None, source=AuditOperateSource.acl):
user_id = cls.get_current_operate_uid(uid)
source = cls.get_operate_source(source)
AuditTriggerLog.create(app_id=app_id, trigger_id=trigger_id, operate_uid=user_id,
operate_type=operate_type.value,
origin=origin, current=current, extra=extra, source=source.value)
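
Usage sketch (illustrative): the q parameter of the search_* helpers is a comma-separated list of key:value pairs; keys not handled by the branch for that log type are ignored. The values below are examples.

    data = AuditCRUD.search_permission(
        app_id="cmdb",
        q="operate_type:grant,rid:5",
        page=1,
        page_size=20,
        start="2023-09-01 00:00:00",
        end="2023-10-01 00:00:00",
    )
    # data["data"] holds the raw permission-log rows; id2roles, id2resources,
    # id2groups, id2perms and id2resource_types resolve the ids they reference.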


@ -0,0 +1,322 @@
# -*- coding:utf-8 -*-
import msgpack
from api.extensions import cache
from api.extensions import db
from api.lib.utils import Lock
from api.models.acl import App
from api.models.acl import Permission
from api.models.acl import Resource
from api.models.acl import ResourceGroup
from api.models.acl import Role
from api.models.acl import User
class AppAccessTokenCache(object):
PREFIX = "AppAccessTokenCache::token::{}"
@classmethod
def get_app_id(cls, token):
app_id = cache.get(cls.PREFIX.format(token))
return app_id
@classmethod
def set(cls, token, app, timeout=7200):
cache.set(cls.PREFIX.format(token), app.app_id, timeout=timeout)
class AppCache(object):
PREFIX_ID = "App::id::{0}"
PREFIX_NAME = "App::name::{0}"
@classmethod
def get(cls, key):
app = cache.get(cls.PREFIX_ID.format(key)) or cache.get(cls.PREFIX_NAME.format(key))
if app is None:
app = App.get_by_id(key) or App.get_by(name=key, to_dict=False, first=True)
if app is not None:
cls.set(app)
return app
@classmethod
def set(cls, app):
cache.set(cls.PREFIX_ID.format(app.id), app)
cache.set(cls.PREFIX_NAME.format(app.name), app)
@classmethod
def clean(cls, app):
cache.delete(cls.PREFIX_ID.format(app.id))
cache.delete(cls.PREFIX_NAME.format(app.name))
class UserCache(object):
PREFIX_ID = "User::uid::{0}"
PREFIX_NAME = "User::username::{0}"
PREFIX_NICK = "User::nickname::{0}"
PREFIX_WXID = "User::wxid::{0}"
@classmethod
def get(cls, key):
user = (cache.get(cls.PREFIX_ID.format(key)) or
cache.get(cls.PREFIX_NAME.format(key)) or
cache.get(cls.PREFIX_NICK.format(key)) or
cache.get(cls.PREFIX_WXID.format(key)))
if not user:
user = (User.query.get(key) or
User.query.get_by_username(key) or
User.query.get_by_nickname(key) or
User.query.get_by_wxid(key))
if user:
cls.set(user)
return user
@classmethod
def set(cls, user):
cache.set(cls.PREFIX_ID.format(user.uid), user)
cache.set(cls.PREFIX_NAME.format(user.username), user)
cache.set(cls.PREFIX_NICK.format(user.nickname), user)
if user.wx_id:
cache.set(cls.PREFIX_WXID.format(user.wx_id), user)
@classmethod
def clean(cls, user):
cache.delete(cls.PREFIX_ID.format(user.uid))
cache.delete(cls.PREFIX_NAME.format(user.username))
cache.delete(cls.PREFIX_NICK.format(user.nickname))
if user.wx_id:
cache.delete(cls.PREFIX_WXID.format(user.wx_id))
class RoleCache(object):
PREFIX_ID = "Role::id::{0}"
PREFIX_NAME = "Role::app_id::{0}::name::{1}"
@classmethod
def get_by_name(cls, app_id, name):
role = cache.get(cls.PREFIX_NAME.format(app_id, name))
if role is None:
role = Role.get_by(app_id=app_id, name=name, first=True, to_dict=False)
if role is None and app_id is None: # try global role
role = Role.get_by(name=name, first=True, to_dict=False)
if role is not None:
cache.set(cls.PREFIX_NAME.format(app_id, name), role)
return role
@classmethod
def get(cls, rid):
role = cache.get(cls.PREFIX_ID.format(rid))
if role is None:
role = Role.get_by_id(rid)
if role is not None:
cache.set(cls.PREFIX_ID.format(rid), role)
return role
@classmethod
def clean(cls, rid):
cache.delete(cls.PREFIX_ID.format(rid))
@classmethod
def clean_by_name(cls, app_id, name):
cache.delete(cls.PREFIX_NAME.format(app_id, name))
class HasResourceRoleCache(object):
PREFIX_KEY = "HasResourceRoleCache::AppId::{0}"
@classmethod
def get(cls, app_id):
return cache.get(cls.PREFIX_KEY.format(app_id)) or {}
@classmethod
def add(cls, rid, app_id):
with Lock('HasResourceRoleCache'):
c = cls.get(app_id)
c[rid] = 1
cache.set(cls.PREFIX_KEY.format(app_id), c, timeout=0)
@classmethod
def remove(cls, rid, app_id):
with Lock('HasResourceRoleCache'):
c = cls.get(app_id)
c.pop(rid, None)
cache.set(cls.PREFIX_KEY.format(app_id), c, timeout=0)
class RoleRelationCache(object):
PREFIX_PARENT = "RoleRelationParent::id::{0}::AppId::{1}"
PREFIX_CHILDREN = "RoleRelationChildren::id::{0}::AppId::{1}"
PREFIX_RESOURCES = "RoleRelationResources::id::{0}::AppId::{1}"
PREFIX_RESOURCES2 = "RoleRelationResources2::id::{0}::AppId::{1}"
@classmethod
def get_parent_ids(cls, rid, app_id):
parent_ids = cache.get(cls.PREFIX_PARENT.format(rid, app_id))
if not parent_ids:
from api.lib.perm.acl.role import RoleRelationCRUD
parent_ids = RoleRelationCRUD.get_parent_ids(rid, app_id)
cache.set(cls.PREFIX_PARENT.format(rid, app_id), parent_ids, timeout=0)
return parent_ids
@classmethod
def get_child_ids(cls, rid, app_id):
child_ids = cache.get(cls.PREFIX_CHILDREN.format(rid, app_id))
if not child_ids:
from api.lib.perm.acl.role import RoleRelationCRUD
child_ids = RoleRelationCRUD.get_child_ids(rid, app_id)
cache.set(cls.PREFIX_CHILDREN.format(rid, app_id), child_ids, timeout=0)
return child_ids
@classmethod
def get_resources(cls, rid, app_id):
"""
:param rid:
:param app_id:
:return: {id2perms: {resource_id: [perm,]}, group2perms: {group_id: [perm, ]}}
"""
resources = cache.get(cls.PREFIX_RESOURCES.format(rid, app_id))
if not resources:
from api.lib.perm.acl.role import RoleCRUD
resources = RoleCRUD.get_resources(rid, app_id)
if resources['id2perms'] or resources['group2perms']:
cache.set(cls.PREFIX_RESOURCES.format(rid, app_id), resources, timeout=0)
return resources or {}
@classmethod
def get_resources2(cls, rid, app_id):
r_g = cache.get(cls.PREFIX_RESOURCES2.format(rid, app_id))
if not r_g:
res = cls.get_resources(rid, app_id)
id2perms = res['id2perms']
group2perms = res['group2perms']
resources, groups = dict(), dict()
for _id in id2perms:
resource = ResourceCache.get(_id)
if not resource:
continue
resource = resource.to_dict()
resource.update(dict(permissions=id2perms[_id]))
resources[_id] = resource
for _id in group2perms:
group = ResourceGroupCache.get(_id)
if not group:
continue
group = group.to_dict()
group.update(dict(permissions=group2perms[_id]))
groups[_id] = group
r_g = msgpack.dumps(dict(resources=resources, groups=groups))
cache.set(cls.PREFIX_RESOURCES2.format(rid, app_id), r_g, timeout=0)
return msgpack.loads(r_g, raw=False)
@classmethod
def rebuild(cls, rid, app_id):
cls.clean(rid, app_id)
db.session.remove()
cls.get_parent_ids(rid, app_id)
cls.get_child_ids(rid, app_id)
resources = cls.get_resources(rid, app_id)
if resources.get('id2perms') or resources.get('group2perms'):
HasResourceRoleCache.add(rid, app_id)
else:
HasResourceRoleCache.remove(rid, app_id)
cls.get_resources2(rid, app_id)
@classmethod
def rebuild2(cls, rid, app_id):
cache.delete(cls.PREFIX_RESOURCES2.format(rid, app_id))
db.session.remove()
cls.get_resources2(rid, app_id)
@classmethod
def clean(cls, rid, app_id):
cache.delete(cls.PREFIX_PARENT.format(rid, app_id))
cache.delete(cls.PREFIX_CHILDREN.format(rid, app_id))
cache.delete(cls.PREFIX_RESOURCES.format(rid, app_id))
cache.delete(cls.PREFIX_RESOURCES2.format(rid, app_id))
class PermissionCache(object):
PREFIX_ID = "Permission::id::{0}::ResourceTypeId::{1}"
PREFIX_NAME = "Permission::name::{0}::ResourceTypeId::{1}"
@classmethod
def get(cls, key, rt_id):
perm = cache.get(cls.PREFIX_ID.format(key, rt_id))
perm = perm or cache.get(cls.PREFIX_NAME.format(key, rt_id))
if perm is None:
perm = Permission.get_by_id(key)
perm = perm or Permission.get_by(name=key, resource_type_id=rt_id, first=True, to_dict=False)
if perm is not None:
cache.set(cls.PREFIX_ID.format(perm.id, rt_id), perm)
cache.set(cls.PREFIX_NAME.format(perm.name, rt_id), perm)
return perm
class ResourceCache(object):
PREFIX_ID = "Resource::id::{0}"
PREFIX_NAME = "Resource::type_id::{0}::name::{1}"
@classmethod
def get(cls, key, type_id=None):
resource = cache.get(cls.PREFIX_ID.format(key)) or cache.get(cls.PREFIX_NAME.format(type_id, key))
if resource is None:
resource = Resource.get_by_id(key) or Resource.get_by(name=key,
resource_type_id=type_id,
to_dict=False,
first=True)
if resource is not None:
cls.set(resource)
return resource
@classmethod
def set(cls, resource):
cache.set(cls.PREFIX_ID.format(resource.id), resource)
cache.set(cls.PREFIX_NAME.format(resource.resource_type_id, resource.name), resource)
@classmethod
def clean(cls, resource):
cache.delete(cls.PREFIX_ID.format(resource.id))
cache.delete(cls.PREFIX_NAME.format(resource.resource_type_id, resource.name))
class ResourceGroupCache(object):
PREFIX_ID = "ResourceGroup::id::{0}"
PREFIX_NAME = "ResourceGroup::type_id::{0}::name::{1}"
@classmethod
def get(cls, key, type_id=None):
group = cache.get(cls.PREFIX_ID.format(key)) or cache.get(cls.PREFIX_NAME.format(type_id, key))
if group is None:
group = ResourceGroup.get_by_id(key) or ResourceGroup.get_by(name=key,
resource_type_id=type_id,
to_dict=False,
first=True)
if group is not None:
cls.set(group)
return group
@classmethod
def set(cls, group):
cache.set(cls.PREFIX_ID.format(group.id), group)
cache.set(cls.PREFIX_NAME.format(group.resource_type_id, group.name), group)
@classmethod
def clean(cls, group):
cache.delete(cls.PREFIX_ID.format(group.id))
cache.delete(cls.PREFIX_NAME.format(group.resource_type_id, group.name))
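
# --- Editor's illustrative sketch, not part of this file ---
# A minimal example of how the caches above are meant to be used together:
# read a role's flattened resources, then invalidate and rebuild after a
# permission change. Assumes a Flask app context with the cache extension
# initialised; `rid` and `app_id` are placeholder values.
from api.lib.perm.acl.cache import RoleRelationCache


def show_role_resources(rid, app_id):
    # get_resources2 returns {"resources": {id: {...}}, "groups": {id: {...}}}
    data = RoleRelationCache.get_resources2(rid, app_id)
    for _id, resource in data["resources"].items():
        print(_id, resource["permissions"])


def on_permissions_changed(rid, app_id):
    # drop all four cache keys for the role, then warm them again
    RoleRelationCache.rebuild(rid, app_id)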


@@ -0,0 +1,13 @@
# -*- coding:utf-8 -*-
from api.lib.utils import BaseEnum
ACL_QUEUE = "acl_async"
class OperateType(BaseEnum):
LOGIN = "0"
READ = "1"
UPDATE = "2"
CREATE = "3"
DELETE = "4"
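
# --- Editor's illustrative sketch, not part of this file ---
# OperateType inherits BaseEnum (defined in api/lib/utils.py in this same
# changeset), so its string values can be validated and enumerated without
# instantiating anything.
from api.lib.perm.acl.const import OperateType

assert OperateType.is_valid(OperateType.READ)   # "1" is a known value
assert not OperateType.is_valid("99")           # unknown values are rejected
print(sorted(OperateType.all()))                # ['0', '1', '2', '3', '4']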


@@ -0,0 +1,304 @@
# -*- coding:utf-8 -*-
import datetime
from flask import abort
from api.extensions import db
from api.lib.perm.acl.audit import AuditCRUD
from api.lib.perm.acl.audit import AuditOperateSource
from api.lib.perm.acl.audit import AuditOperateType
from api.lib.perm.acl.cache import PermissionCache
from api.lib.perm.acl.cache import RoleCache
from api.lib.perm.acl.cache import UserCache
from api.lib.perm.acl.const import ACL_QUEUE
from api.lib.perm.acl.resp_format import ErrFormat
from api.lib.perm.acl.role import RoleRelationCRUD
from api.models.acl import Resource
from api.models.acl import ResourceGroup
from api.models.acl import ResourceType
from api.models.acl import RolePermission
from api.tasks.acl import role_rebuild
class PermissionCRUD(object):
@staticmethod
def get_all(resource_id=None, group_id=None, need_users=True):
result = dict()
if resource_id is not None:
r = Resource.get_by_id(resource_id)
if not r:
return result
rt_id = r.resource_type_id
perms = RolePermission.get_by(resource_id=resource_id, to_dict=False)
else:
rg = ResourceGroup.get_by_id(group_id)
if not rg:
return result
rt_id = rg.resource_type_id
perms = RolePermission.get_by(group_id=group_id, to_dict=False)
rid2obj = dict()
uid2obj = dict()
for perm in perms:
perm_dict = PermissionCache.get(perm.perm_id, rt_id)
perm_dict = perm_dict and perm_dict.to_dict()
if not perm_dict:
continue
perm_dict.update(dict(rid=perm.rid))
if perm.rid not in rid2obj:
rid2obj[perm.rid] = RoleCache.get(perm.rid)
role = rid2obj[perm.rid]
if role and role.uid:
if role.uid not in uid2obj:
uid2obj[role.uid] = UserCache.get(role.uid)
name = uid2obj[role.uid].nickname
elif role:
name = role.name
else:
continue
result.setdefault(name,
dict(perms=[],
users=RoleRelationCRUD.get_users_by_rid(perm.rid, perm.app_id, rid2obj, uid2obj)
if need_users else [])
)['perms'].append(perm_dict)
return result
@classmethod
def get_all2(cls, resource_name, resource_type_name, app_id):
rt = ResourceType.get_by(name=resource_type_name, first=True, to_dict=False)
rt or abort(404, ErrFormat.resource_type_not_found.format(resource_type_name))
r = Resource.get_by(name=resource_name, resource_type_id=rt.id, app_id=app_id, first=True, to_dict=False)
return r and cls.get_all(r.id)
@staticmethod
def grant(rid, perms, resource_id=None, group_id=None, rebuild=True, source=AuditOperateSource.acl):
app_id = None
rt_id = None
from api.lib.perm.acl.resource import ResourceTypeCRUD
if resource_id is not None:
from api.models.acl import Resource
resource = Resource.get_by_id(resource_id) or abort(404, ErrFormat.resource_not_found.format(
"id={}".format(resource_id)))
app_id = resource.app_id
rt_id = resource.resource_type_id
if not perms:
perms = [i.get('name') for i in ResourceTypeCRUD.get_perms(resource.resource_type_id)]
elif group_id is not None:
from api.models.acl import ResourceGroup
group = ResourceGroup.get_by_id(group_id) or abort(
404, ErrFormat.resource_group_not_found.format("id={}".format(group_id)))
app_id = group.app_id
rt_id = group.resource_type_id
if not perms:
perms = [i.get('name') for i in ResourceTypeCRUD.get_perms(group.resource_type_id)]
_role_permissions = []
for _perm in set(perms):
perm = PermissionCache.get(_perm, rt_id)
if not perm:
continue
existed = RolePermission.get_by(rid=rid,
app_id=app_id,
perm_id=perm.id,
group_id=group_id,
resource_id=resource_id)
if not existed:
__role_permission = RolePermission.create(rid=rid,
app_id=app_id,
perm_id=perm.id,
group_id=group_id,
resource_id=resource_id)
_role_permissions.append(__role_permission)
if rebuild:
role_rebuild.apply_async(args=(rid, app_id), queue=ACL_QUEUE)
AuditCRUD.add_permission_log(app_id, AuditOperateType.grant, rid, rt_id, _role_permissions,
source=source)
@staticmethod
def batch_grant_by_resource_names(rid, perms, resource_type_id, resource_names,
resource_ids=None, perm_map=None, app_id=None):
from api.lib.perm.acl.resource import ResourceTypeCRUD
if resource_names:
resource_ids = []
from api.models.acl import Resource
for n in resource_names:
resource = Resource.get_by(name=n, resource_type_id=resource_type_id, first=True, to_dict=False)
if resource:
app_id = resource.app_id
if not perms:
perms = [i.get('name') for i in ResourceTypeCRUD.get_perms(resource.resource_type_id)]
resource_ids.append(resource.id)
resource_ids = resource_ids or []
_role_permissions = []
if isinstance(perm_map, dict):
perm2resource = dict()
for resource_id in resource_ids:
for _perm in (perm_map.get(str(resource_id)) or []):
perm2resource.setdefault(_perm, []).append(resource_id)
for _perm in perm2resource:
perm = PermissionCache.get(_perm, resource_type_id)
existeds = RolePermission.get_by(rid=rid,
app_id=app_id,
perm_id=perm.id,
__func_in___key_resource_id=perm2resource[_perm],
to_dict=False)
for resource_id in (set(perm2resource[_perm]) - set([i.resource_id for i in existeds])):
_role_permission = RolePermission.create(flush=False,
commit=False,
rid=rid,
app_id=app_id,
perm_id=perm.id,
resource_id=resource_id,
)
_role_permissions.append(_role_permission)
db.session.commit()
else:
for _perm in perms:
perm = PermissionCache.get(_perm, resource_type_id)
for resource_id in resource_ids:
existed = RolePermission.get_by(rid=rid,
app_id=app_id,
perm_id=perm.id,
resource_id=resource_id)
if not existed:
_role_permission = RolePermission.create(rid=rid,
app_id=app_id,
perm_id=perm.id,
resource_id=resource_id)
_role_permissions.append(_role_permission)
role_rebuild.apply_async(args=(rid, app_id), queue=ACL_QUEUE)
AuditCRUD.add_permission_log(app_id, AuditOperateType.grant, rid, resource_type_id, _role_permissions)
@staticmethod
def revoke(rid, perms, resource_id=None, group_id=None, rebuild=True, source=AuditOperateSource.acl):
app_id = None
rt_id = None
from api.lib.perm.acl.resource import ResourceTypeCRUD
if resource_id is not None:
from api.models.acl import Resource
resource = Resource.get_by_id(resource_id) or abort(
404, ErrFormat.resource_not_found.format("id={}".format(resource_id)))
app_id = resource.app_id
rt_id = resource.resource_type_id
if not perms:
perms = [i.get('name') for i in ResourceTypeCRUD.get_perms(resource.resource_type_id)]
elif group_id is not None:
from api.models.acl import ResourceGroup
group = ResourceGroup.get_by_id(group_id) or abort(
404, ErrFormat.resource_group_not_found.format("id={}".format(group_id)))
app_id = group.app_id
rt_id = group.resource_type_id
if not perms:
perms = [i.get('name') for i in ResourceTypeCRUD.get_perms(group.resource_type_id)]
_role_permissions = []
for perm in perms:
perm = PermissionCache.get(perm, rt_id)
if not perm:
continue
existed = RolePermission.get_by(rid=rid,
perm_id=perm.id,
group_id=group_id,
resource_id=resource_id,
first=True,
to_dict=False)
if existed:
existed.soft_delete()
_role_permissions.append(existed)
if rebuild:
role_rebuild.apply_async(args=(rid, app_id), queue=ACL_QUEUE)
AuditCRUD.add_permission_log(app_id, AuditOperateType.revoke, rid, rt_id, _role_permissions,
source=source)
@staticmethod
def batch_revoke_by_resource_names(rid, perms, resource_type_id, resource_names,
resource_ids=None, perm_map=None, app_id=None):
from api.lib.perm.acl.resource import ResourceTypeCRUD
if resource_names:
resource_ids = []
from api.models.acl import Resource
for n in resource_names:
resource = Resource.get_by(name=n, resource_type_id=resource_type_id, first=True, to_dict=False)
if resource:
app_id = resource.app_id
if not perms:
perms = [i.get('name') for i in ResourceTypeCRUD.get_perms(resource.resource_type_id)]
resource_ids.append(resource.id)
resource_ids = resource_ids or []
_role_permissions = []
if isinstance(perm_map, dict):
perm2resource = dict()
for resource_id in resource_ids:
for _perm in (perm_map.get(str(resource_id)) or []):
perm2resource.setdefault(_perm, []).append(resource_id)
for _perm in perm2resource:
perm = PermissionCache.get(_perm, resource_type_id)
existeds = RolePermission.get_by(rid=rid,
app_id=app_id,
perm_id=perm.id,
__func_in___key_resource_id=perm2resource[_perm],
to_dict=False)
for existed in existeds:
existed.deleted = True
existed.deleted_at = datetime.datetime.now()
db.session.add(existed)
_role_permissions.append(existed)
db.session.commit()
else:
for _perm in perms:
perm = PermissionCache.get(_perm, resource_type_id)
for resource_id in resource_ids:
existed = RolePermission.get_by(rid=rid,
app_id=app_id,
perm_id=perm.id,
resource_id=resource_id,
first=True, to_dict=False)
if existed:
existed.soft_delete()
_role_permissions.append(existed)
role_rebuild.apply_async(args=(rid, app_id), queue=ACL_QUEUE)
AuditCRUD.add_permission_log(app_id, AuditOperateType.revoke, rid, resource_type_id, _role_permissions)
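
# --- Editor's illustrative sketch, not part of this file ---
# A hedged example of the grant/revoke flow above. Assumes a Flask app
# context, a Celery worker consuming ACL_QUEUE, and that this module is
# importable as api.lib.perm.acl.permission (the file name is not shown in
# this diff); `rid` and `resource_id` are placeholders, and 'read'/'write'
# must already exist as permission names on the resource's type.
from api.lib.perm.acl.permission import PermissionCRUD


def grant_read_write(rid, resource_id):
    # an empty perms list would fall back to every permission defined on the
    # resource type, so the names are passed explicitly here
    PermissionCRUD.grant(rid, ['read', 'write'], resource_id=resource_id)


def revoke_write(rid, resource_id):
    PermissionCRUD.revoke(rid, ['write'], resource_id=resource_id)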


@@ -0,0 +1,21 @@
# -*- coding:utf-8 -*-
from api.models.acl import OperationRecord
class OperateRecordCRUD(object):
@staticmethod
def search(page, page_size, **kwargs):
query = OperationRecord.get_by(only_query=True)
for k, v in kwargs.items():
if hasattr(OperationRecord, k) and v:
query = query.filter(getattr(OperationRecord, k) == v)
numfound = query.count()
res = query.offset((page - 1) * page_size).limit(page_size)
return numfound, res
@staticmethod
def add(app, rolename, operate, obj):
OperationRecord.create(app=app, rolename=rolename, operate=operate, obj=obj)
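
# --- Editor's illustrative sketch, not part of this file ---
# search() only applies keyword filters whose key matches a column on
# OperationRecord and whose value is truthy, so unknown keys are silently
# ignored. Assumes this module's import path (not shown in this diff) is
# api.lib.perm.acl.record; the filter values are placeholders.
from api.lib.perm.acl.record import OperateRecordCRUD

numfound, records = OperateRecordCRUD.search(1, 20, app='acl', operate='1')
for record in records:
    print(record.app, record.rolename, record.operate, record.obj)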


@@ -0,0 +1,343 @@
# -*- coding:utf-8 -*-
from flask import abort
from flask import current_app
from api.extensions import db
from api.lib.perm.acl.audit import AuditCRUD
from api.lib.perm.acl.audit import AuditOperateType
from api.lib.perm.acl.audit import AuditScope
from api.lib.perm.acl.cache import ResourceCache
from api.lib.perm.acl.cache import ResourceGroupCache
from api.lib.perm.acl.cache import UserCache
from api.lib.perm.acl.const import ACL_QUEUE
from api.lib.perm.acl.resp_format import ErrFormat
from api.lib.perm.acl.trigger import TriggerCRUD
from api.models.acl import Permission
from api.models.acl import Resource
from api.models.acl import ResourceGroup
from api.models.acl import ResourceGroupItems
from api.models.acl import ResourceType
from api.models.acl import RolePermission
from api.tasks.acl import role_rebuild
from api.tasks.acl import update_resource_to_build_role
class ResourceTypeCRUD(object):
cls = ResourceType
@staticmethod
def search(q, app_id, page=1, page_size=None):
query = db.session.query(ResourceType).filter(
ResourceType.deleted.is_(False)).filter(ResourceType.app_id == app_id)
if q:
query = query.filter(ResourceType.name.ilike('%{0}%'.format(q)))
numfound = query.count()
res = query.offset((page - 1) * page_size).limit(page_size)
rt_ids = [i.id for i in res]
perms = db.session.query(Permission).filter(Permission.deleted.is_(False)).filter(
Permission.resource_type_id.in_(rt_ids))
id2perms = dict()
for perm in perms:
id2perms.setdefault(perm.resource_type_id, []).append(perm.to_dict())
return numfound, res, id2perms
@classmethod
def id2name(cls):
return {i.id: i.name for i in ResourceType.get_by(to_dict=False)}
@staticmethod
def get_by_name(app_id, name):
resource_type = ResourceType.get_by(first=True, app_id=app_id, name=name, to_dict=False)
return resource_type
@staticmethod
def get_perms(rt_id):
perms = Permission.get_by(resource_type_id=rt_id, to_dict=False)
return [i.to_dict() for i in perms]
@classmethod
def add(cls, app_id, name, description, perms):
ResourceType.get_by(name=name, app_id=app_id) and abort(400, ErrFormat.resource_type_exists.format(name))
rt = ResourceType.create(name=name, description=description, app_id=app_id)
_, current_perm_ids = cls.update_perms(rt.id, perms, app_id)
AuditCRUD.add_resource_log(app_id, AuditOperateType.create,
AuditScope.resource_type, rt.id, {}, rt.to_dict(),
{'permission_ids': {'current': current_perm_ids, 'origin': []}, }
)
return rt
@classmethod
def update(cls, rt_id, **kwargs):
kwargs.pop('app_id', None)
rt = ResourceType.get_by_id(rt_id) or abort(404,
ErrFormat.resource_type_not_found.format("id={}".format(rt_id)))
if 'name' in kwargs:
other = ResourceType.get_by(name=kwargs['name'], app_id=rt.app_id, to_dict=False, first=True)
if other and other.id != rt_id:
return abort(400, ErrFormat.resource_type_exists.format(kwargs['name']))
perms = kwargs.pop('perms', None)
current_perm_ids = []
existed_perm_ids = []
if perms:
existed_perm_ids, current_perm_ids = cls.update_perms(rt_id, perms, rt.app_id)
origin = rt.to_dict()
rt = rt.update(**kwargs)
AuditCRUD.add_resource_log(rt.app_id, AuditOperateType.update,
AuditScope.resource_type, rt.id, origin, rt.to_dict(),
{'permission_ids': {'current': current_perm_ids, 'origin': existed_perm_ids}, }
)
return rt
@classmethod
def delete(cls, rt_id):
rt = ResourceType.get_by_id(rt_id) or abort(
404, ErrFormat.resource_type_not_found.format("id={}".format(rt_id)))
Resource.get_by(resource_type_id=rt_id) and abort(400, ErrFormat.resource_type_cannot_delete)
origin = rt.to_dict()
existed_perm_ids, _ = cls.update_perms(rt_id, [], rt.app_id)
rt.soft_delete()
AuditCRUD.add_resource_log(rt.app_id, AuditOperateType.delete,
AuditScope.resource_type, rt.id, origin, {},
{'permission_ids': {'current': [], 'origin': existed_perm_ids}, }
)
@classmethod
def update_perms(cls, rt_id, perms, app_id):
existed = Permission.get_by(resource_type_id=rt_id, to_dict=False)
existed_names = [i.name for i in existed]
existed_ids = [i.id for i in existed]
current_ids = []
for i in existed:
if i.name not in perms:
i.soft_delete()
else:
current_ids.append(i.id)
for i in perms:
if i not in existed_names:
p = Permission.create(resource_type_id=rt_id,
name=i,
app_id=app_id)
current_ids.append(p.id)
return existed_ids, current_ids
class ResourceGroupCRUD(object):
cls = ResourceGroup
@staticmethod
def search(q, app_id, resource_type_id, page=1, page_size=None):
query = db.session.query(ResourceGroup).filter(
ResourceGroup.deleted.is_(False)).filter(ResourceGroup.app_id == app_id).filter(
ResourceGroup.resource_type_id == resource_type_id)
if q:
query = query.filter(ResourceGroup.name.ilike("%{0}%".format(q)))
numfound = query.count()
return numfound, query.offset((page - 1) * page_size).limit(page_size)
@staticmethod
def get_items(rg_id):
items = ResourceGroupItems.get_by(group_id=rg_id, to_dict=False)
return [i.resource.to_dict() for i in items]
@staticmethod
def add(name, type_id, app_id, uid=None):
ResourceGroup.get_by(name=name, resource_type_id=type_id, app_id=app_id) and abort(
400, ErrFormat.resource_group_exists.format(name))
rg = ResourceGroup.create(name=name, resource_type_id=type_id, app_id=app_id, uid=uid)
AuditCRUD.add_resource_log(app_id, AuditOperateType.create,
AuditScope.resource_group, rg.id, {}, rg.to_dict(), {})
return rg
@staticmethod
def update(rg_id, items):
rg = ResourceGroup.get_by_id(rg_id) or abort(
404, ErrFormat.resource_group_not_found.format("id={}".format(rg_id)))
existed = ResourceGroupItems.get_by(group_id=rg_id, to_dict=False)
existed_ids = [i.resource_id for i in existed]
for i in existed:
if i.resource_id not in items:
i.soft_delete()
for _id in items:
if _id not in existed_ids:
ResourceGroupItems.create(group_id=rg_id, resource_id=_id)
AuditCRUD.add_resource_log(rg.app_id, AuditOperateType.update,
AuditScope.resource_group, rg.id, rg.to_dict(), rg.to_dict(),
{'resource_ids': {'current': items, 'origin': existed_ids}, }
)
@staticmethod
def delete(rg_id):
rg = ResourceGroup.get_by_id(rg_id) or abort(
404, ErrFormat.resource_group_not_found.format("id={}".format(rg_id)))
origin = rg.to_dict()
rg.soft_delete()
items = ResourceGroupItems.get_by(group_id=rg_id, to_dict=False)
existed_ids = []
for item in items:
existed_ids.append(item.resource_id)
item.soft_delete()
rebuild = set()
for i in RolePermission.get_by(group_id=rg_id, to_dict=False):
i.soft_delete()
rebuild.add(i.rid)
for _rid in rebuild:
role_rebuild.apply_async(args=(_rid, rg.app_id), queue=ACL_QUEUE)
ResourceGroupCache.clean(rg)
AuditCRUD.add_resource_log(rg.app_id, AuditOperateType.delete,
AuditScope.resource_group, rg.id, origin, {},
{'resource_ids': {'current': [], 'origin': existed_ids}, }
)
class ResourceCRUD(object):
cls = Resource
@staticmethod
def _parse_resource_type_id(type_id, app_id):
try:
type_id = int(type_id)
except ValueError:
_type = ResourceType.get_by(name=type_id, app_id=app_id, first=True, to_dict=False)
type_id = _type and _type.id
return type_id
@classmethod
def search(cls, q, u, app_id, resource_type_id=None, page=1, page_size=None):
query = Resource.query.filter(
Resource.deleted.is_(False)).filter(Resource.app_id == app_id)
if q:
query = query.filter(Resource.name.ilike("%{0}%".format(q)))
if u and UserCache.get(u):
query = query.filter(Resource.uid == UserCache.get(u).uid)
if resource_type_id:
resource_type_id = cls._parse_resource_type_id(resource_type_id, app_id)
query = query.filter(Resource.resource_type_id == resource_type_id)
numfound = query.count()
res = [i.to_dict() for i in query.offset((page - 1) * page_size).limit(page_size)]
for i in res:
i['user'] = UserCache.get(i['uid']).nickname if i['uid'] else ''
return numfound, res
@classmethod
def add(cls, name, type_id, app_id, uid=None):
type_id = cls._parse_resource_type_id(type_id, app_id)
Resource.get_by(name=name, resource_type_id=type_id, app_id=app_id) and abort(
400, ErrFormat.resource_exists.format(name))
r = Resource.create(name=name, resource_type_id=type_id, app_id=app_id, uid=uid)
from api.tasks.acl import apply_trigger
triggers = TriggerCRUD.match_triggers(app_id, r.name, r.resource_type_id, uid)
current_app.logger.info(triggers)
for trigger in triggers:
            # automatically applied triggers do not record an operator uid
apply_trigger.apply_async(args=(trigger.id,),
kwargs=dict(resource_id=r.id, ), queue=ACL_QUEUE)
AuditCRUD.add_resource_log(app_id, AuditOperateType.create,
AuditScope.resource, r.id, {}, r.to_dict(), {})
return r
@staticmethod
def update(_id, name):
# todo trigger rebuild
resource = Resource.get_by_id(_id) or abort(404, ErrFormat.resource_not_found.format("id={}".format(_id)))
origin = resource.to_dict()
other = Resource.get_by(name=name, resource_type_id=resource.resource_type_id, to_dict=False, first=True)
if other and other.id != _id:
return abort(400, ErrFormat.resource_exists.format(name))
ResourceCache.clean(resource)
resource = resource.update(name=name)
update_resource_to_build_role.apply_async(args=(_id, resource.app_id), queue=ACL_QUEUE)
AuditCRUD.add_resource_log(resource.app_id, AuditOperateType.update,
AuditScope.resource, resource.id, origin, resource.to_dict(), {})
return resource
@staticmethod
def delete(_id):
resource = Resource.get_by_id(_id) or abort(404, ErrFormat.resource_not_found.format("id={}".format(_id)))
origin = resource.to_dict()
resource.soft_delete()
ResourceCache.clean(resource)
rebuilds = []
for i in RolePermission.get_by(resource_id=_id, to_dict=False):
i.soft_delete()
rebuilds.append((i.rid, i.app_id))
for rid, app_id in set(rebuilds):
role_rebuild.apply_async(args=(rid, app_id), queue=ACL_QUEUE)
AuditCRUD.add_resource_log(resource.app_id, AuditOperateType.delete,
AuditScope.resource, resource.id, origin, {}, {})
@classmethod
def delete_by_name(cls, name, type_id, app_id):
resource = Resource.get_by(name=name, resource_type_id=type_id, app_id=app_id) or abort(
400, ErrFormat.resource_exists.format(name))
return cls.delete(resource.id)
@classmethod
def update_by_name(cls, name, type_id, app_id, new_name):
resource = Resource.get_by(name=name, resource_type_id=type_id, app_id=app_id) or abort(
400, ErrFormat.resource_exists.format(name))
return cls.update(resource.id, new_name)
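
# --- Editor's illustrative sketch, not part of this file ---
# End-to-end example of the CRUD classes above: create a resource type with
# its permission names, then a resource of that type. Assumes a Flask app
# context and a Celery worker for ACL_QUEUE; 'server', 'web-01' and app_id
# are placeholders.
from api.lib.perm.acl.resource import ResourceCRUD, ResourceTypeCRUD


def register_server(app_id):
    rt = ResourceTypeCRUD.add(app_id, 'server', 'physical servers', ['read', 'update'])
    # matching triggers (if any) are applied asynchronously after creation
    return ResourceCRUD.add('web-01', rt.id, app_id)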


@@ -0,0 +1,43 @@
# -*- coding:utf-8 -*-
from api.lib.resp_format import CommonErrFormat
class ErrFormat(CommonErrFormat):
auth_only_with_app_token_failed = "应用 Token验证失败"
session_invalid = "您不是应用管理员 或者 session失效(尝试一下退出重新登录)"
resource_type_not_found = "资源类型 {} 不存在!"
resource_type_exists = "资源类型 {} 已经存在!"
resource_type_cannot_delete = "因为该类型下有资源的存在, 不能删除!"
user_not_found = "用户 {} 不存在!"
user_exists = "用户 {} 已经存在!"
role_not_found = "角色 {} 不存在!"
role_exists = "角色 {} 已经存在!"
global_role_not_found = "全局角色 {} 不存在!"
global_role_exists = "全局角色 {} 已经存在!"
user_role_delete_invalid = "删除用户角色, 请在 用户管理 页面操作!"
resource_no_permission = "您没有资源: {}{} 权限"
admin_required = "需要管理员权限"
role_required = "需要角色: {}"
app_is_ready_existed = "应用 {} 已经存在"
app_not_found = "应用 {} 不存在!"
app_secret_invalid = "应用的Secret无效"
resource_not_found = "资源 {} 不存在!"
resource_exists = "资源 {} 已经存在!"
resource_group_not_found = "资源组 {} 不存在!"
resource_group_exists = "资源组 {} 已经存在!"
inheritance_dead_loop = "继承检测到了死循环"
role_relation_not_found = "角色关系 {} 不存在!"
trigger_not_found = "触发器 {} 不存在!"
trigger_exists = "触发器 {} 已经存在!"
trigger_disabled = "触发器 {} 已经被禁用!"
invalid_password = "密码不正确!"


@@ -0,0 +1,498 @@
# -*- coding:utf-8 -*-
import time
import six
from flask import abort
from flask import current_app
from sqlalchemy import or_
from api.extensions import db
from api.lib.perm.acl.app import AppCRUD
from api.lib.perm.acl.audit import AuditCRUD, AuditOperateType, AuditScope
from api.lib.perm.acl.cache import AppCache
from api.lib.perm.acl.cache import HasResourceRoleCache
from api.lib.perm.acl.cache import RoleCache
from api.lib.perm.acl.cache import RoleRelationCache
from api.lib.perm.acl.cache import UserCache
from api.lib.perm.acl.const import ACL_QUEUE
from api.lib.perm.acl.const import OperateType
from api.lib.perm.acl.resource import ResourceGroupCRUD
from api.lib.perm.acl.resp_format import ErrFormat
from api.models.acl import Resource, ResourceGroup
from api.models.acl import ResourceGroupItems
from api.models.acl import ResourceType
from api.models.acl import Role
from api.models.acl import RolePermission
from api.models.acl import RoleRelation
from api.tasks.acl import op_record
from api.tasks.acl import role_rebuild
class RoleRelationCRUD(object):
cls = RoleRelation
@staticmethod
def get_parents(rids=None, uids=None, app_id=None, all_app=False):
rid2uid = dict()
if uids is not None:
uids = [uids] if isinstance(uids, six.integer_types) else uids
rids = db.session.query(Role).filter(Role.deleted.is_(False)).filter(Role.uid.in_(uids))
rid2uid = {i.id: i.uid for i in rids}
rids = [i.id for i in rids]
else:
rids = [rids] if isinstance(rids, six.integer_types) else rids
if app_id is not None:
res = db.session.query(RoleRelation).filter(
RoleRelation.child_id.in_(rids)).filter(RoleRelation.app_id == app_id).filter(
RoleRelation.deleted.is_(False)).union(
db.session.query(RoleRelation).filter(
RoleRelation.child_id.in_(rids)).filter(RoleRelation.app_id.is_(None)).filter(
RoleRelation.deleted.is_(False)))
elif not all_app:
res = db.session.query(RoleRelation).filter(
RoleRelation.child_id.in_(rids)).filter(RoleRelation.app_id.is_(None)).filter(
RoleRelation.deleted.is_(False))
else:
res = db.session.query(RoleRelation).filter(
RoleRelation.child_id.in_(rids)).filter(RoleRelation.deleted.is_(False))
id2parents = {}
for i in res:
id2parents.setdefault(rid2uid.get(i.child_id, i.child_id), []).append(RoleCache.get(i.parent_id).to_dict())
return id2parents
@staticmethod
def get_parent_ids(rid, app_id):
if app_id is not None:
return [i.parent_id for i in RoleRelation.get_by(child_id=rid, app_id=app_id, to_dict=False)] + \
[i.parent_id for i in RoleRelation.get_by(child_id=rid, app_id=None, to_dict=False)]
else:
return [i.parent_id for i in RoleRelation.get_by(child_id=rid, app_id=app_id, to_dict=False)]
@staticmethod
def get_child_ids(rid, app_id):
if app_id is not None:
return [i.child_id for i in RoleRelation.get_by(parent_id=rid, app_id=app_id, to_dict=False)] + \
[i.child_id for i in RoleRelation.get_by(parent_id=rid, app_id=None, to_dict=False)]
else:
return [i.child_id for i in RoleRelation.get_by(parent_id=rid, app_id=app_id, to_dict=False)]
@classmethod
def recursive_parent_ids(cls, rid, app_id):
all_parent_ids = set()
def _get_parent(_id):
all_parent_ids.add(_id)
parent_ids = RoleRelationCache.get_parent_ids(_id, app_id)
for parent_id in parent_ids:
_get_parent(parent_id)
_get_parent(rid)
return all_parent_ids
@classmethod
def recursive_child_ids(cls, rid, app_id):
all_child_ids = set()
def _get_children(_id):
all_child_ids.add(_id)
child_ids = RoleRelationCache.get_child_ids(_id, app_id)
for child_id in child_ids:
_get_children(child_id)
_get_children(rid)
return all_child_ids
@classmethod
def get_users_by_rid(cls, rid, app_id, rid2obj=None, uid2obj=None):
rid2obj = rid2obj or dict()
uid2obj = uid2obj or dict()
users = []
rids = cls.recursive_child_ids(rid, app_id)
for rid in rids:
if rid not in rid2obj:
rid2obj[rid] = RoleCache.get(rid)
role = rid2obj[rid]
if role and role.uid:
if role.uid and role.uid not in uid2obj:
uid2obj[role.uid] = UserCache.get(role.uid)
u = uid2obj.get(role.uid)
u = u and u.to_dict()
if u:
u.update(dict(role=role.to_dict()))
users.append(u)
# todo role read log
# user_id = AuditCRUD.get_current_operate_uid()
# audit_role_log.apply_async(args=(app_id, user_id, result.copy()),
# queue=ACL_QUEUE)
return users
@classmethod
def add(cls, role, parent_id, child_ids, app_id):
result = []
for child_id in child_ids:
existed = RoleRelation.get_by(parent_id=parent_id, child_id=child_id, app_id=app_id)
if existed:
continue
RoleRelationCache.clean(parent_id, app_id)
RoleRelationCache.clean(child_id, app_id)
if parent_id in cls.recursive_child_ids(child_id, app_id):
return abort(400, ErrFormat.inheritance_dead_loop)
if app_id is None:
for app in AppCRUD.get_all():
if app.name != "acl":
RoleRelationCache.clean(child_id, app.id)
result.append(RoleRelation.create(parent_id=parent_id, child_id=child_id, app_id=app_id).to_dict())
AuditCRUD.add_role_log(app_id, AuditOperateType.role_relation_add,
AuditScope.role_relation, role.id, {}, {},
{'child_ids': list(child_ids), 'parent_ids': [parent_id], }
)
return result
@classmethod
def delete(cls, _id, app_id):
existed = RoleRelation.get_by_id(_id) or abort(
400, ErrFormat.role_relation_not_found.format("id={}".format(_id)))
child_ids = cls.recursive_child_ids(existed.child_id, app_id)
for child_id in child_ids:
role_rebuild.apply_async(args=(child_id, app_id), queue=ACL_QUEUE)
role = RoleCache.get(existed.parent_id)
existed.soft_delete()
RoleRelationCache.clean(existed.parent_id, app_id)
RoleRelationCache.clean(existed.child_id, app_id)
AuditCRUD.add_role_log(app_id, AuditOperateType.role_relation_delete,
AuditScope.role_relation, role.id, {}, {},
{'child_ids': list(child_ids), 'parent_ids': [existed.parent_id], }
)
@classmethod
def delete2(cls, parent_id, child_id, app_id):
existed = RoleRelation.get_by(parent_id=parent_id, child_id=child_id, app_id=app_id, first=True, to_dict=False)
existed or abort(400, ErrFormat.role_relation_not_found.format("{} -> {}".format(parent_id, child_id)))
role = RoleCache.get(existed.parent_id)
existed.soft_delete()
child_ids = cls.recursive_child_ids(existed.child_id, app_id)
for child_id in child_ids:
role_rebuild.apply_async(args=(child_id, app_id), queue=ACL_QUEUE)
RoleRelationCache.clean(existed.parent_id, app_id)
RoleRelationCache.clean(existed.child_id, app_id)
AuditCRUD.add_role_log(app_id, AuditOperateType.role_relation_delete,
AuditScope.role_relation, role.id, {}, {},
{'child_ids': list(child_ids), 'parent_ids': [existed.parent_id], }
)
class RoleCRUD(object):
cls = Role
@staticmethod
def search(q, app_id, page=1, page_size=None, user_role=True, is_all=False, user_only=False):
if user_only: # only user role
query = db.session.query(Role).filter(Role.deleted.is_(False)).filter(Role.uid.isnot(None))
else:
query = db.session.query(Role).filter(Role.deleted.is_(False)).filter(
or_(Role.app_id == app_id, Role.app_id.is_(None)))
if not user_role: # only virtual role
query = query.filter(Role.uid.is_(None))
if not is_all:
role_ids = list(HasResourceRoleCache.get(app_id).keys())
query = query.filter(Role.id.in_(role_ids))
if q:
query = query.filter(Role.name.ilike('%{0}%'.format(q)))
numfound = query.count()
return numfound, query.offset((page - 1) * page_size).limit(page_size)
@staticmethod
def add_role(name, app_id=None, is_app_admin=False, uid=None, password=None):
if app_id and AppCache.get(app_id).name == "acl":
app_id = None
Role.get_by(name=name, app_id=app_id) and abort(400, ErrFormat.role_exists.format(name))
if app_id is not None:
Role.get_by(name=name, app_id=None) and abort(400, ErrFormat.global_role_exists.format(name))
from api.lib.perm.acl.user import UserCRUD
key, secret = UserCRUD.gen_key_secret()
role = Role.create(name=name,
app_id=app_id,
is_app_admin=is_app_admin,
password=password,
key=key,
secret=secret,
uid=uid)
AuditCRUD.add_role_log(app_id, AuditOperateType.create,
AuditScope.role, role.id, {}, role.to_dict(), {})
return role
@staticmethod
def update_role(rid, **kwargs):
kwargs.pop('app_id', None)
role = Role.get_by_id(rid) or abort(404, ErrFormat.role_not_found.format("rid={}".format(rid)))
origin = role.to_dict()
RoleCache.clean(rid)
role = role.update(**kwargs)
if origin['uid'] and kwargs.get('name') and kwargs.get('name') != origin['name']:
from api.models.acl import User
user = User.get_by(uid=origin['uid'], first=True, to_dict=False)
if user:
user.update(username=kwargs['name'])
AuditCRUD.add_role_log(role.app_id, AuditOperateType.update,
AuditScope.role, role.id, origin, role.to_dict(), {},
)
return role
@staticmethod
def get_by_name(name, app_id):
role = Role.get_by(name=name, app_id=app_id)
return role
@classmethod
def delete_role(cls, rid, force=False):
from api.lib.perm.acl.acl import is_admin
role = Role.get_by_id(rid) or abort(404, ErrFormat.role_not_found.format("rid={}".format(rid)))
if not role.app_id and not is_admin():
return abort(403, ErrFormat.admin_required)
not force and role.uid and abort(400, ErrFormat.user_role_delete_invalid)
origin = role.to_dict()
child_ids = []
parent_ids = []
recursive_child_ids = list(RoleRelationCRUD.recursive_child_ids(rid, role.app_id))
for i in RoleRelation.get_by(parent_id=rid, to_dict=False):
child_ids.append(i.child_id)
i.soft_delete()
for i in RoleRelation.get_by(child_id=rid, to_dict=False):
parent_ids.append(i.parent_id)
i.soft_delete()
role_permissions = []
for i in RolePermission.get_by(rid=rid, to_dict=False):
role_permissions.append(i.to_dict())
i.soft_delete()
role.soft_delete()
role_rebuild.apply_async(args=(recursive_child_ids, role.app_id), queue=ACL_QUEUE)
RoleCache.clean(rid)
RoleRelationCache.clean(rid, role.app_id)
AuditCRUD.add_role_log(role.app_id, AuditOperateType.delete,
AuditScope.role, role.id, origin, {},
{'child_ids': child_ids, 'parent_ids': parent_ids,
'role_permissions': role_permissions, },
)
@staticmethod
def get_resources(rid, app_id):
res = RolePermission.get_by(rid=rid, app_id=app_id, to_dict=False)
id2perms = dict(id2perms={}, group2perms={})
for i in res:
if i.resource_id:
id2perms['id2perms'].setdefault(i.resource_id, []).append(i.perm.name)
elif i.group_id:
id2perms['group2perms'].setdefault(i.group_id, []).append(i.perm.name)
return id2perms
@staticmethod
def _extend_resources(rid, resource_type_id, app_id):
res = RoleRelationCache.get_resources2(rid, app_id)
resources = {_id: res['resources'][_id] for _id in res['resources']
if not resource_type_id or resource_type_id == res['resources'][_id]['resource_type_id']}
groups = {_id: res['groups'][_id] for _id in res['groups']
if not resource_type_id or resource_type_id == res['groups'][_id]['resource_type_id']}
return resources, groups
@classmethod
def recursive_resources(cls, rid, app_id, resource_type_id=None, group_flat=True, to_record=False):
def _merge(a, b):
for i in b:
if i in a:
a[i]['permissions'] = list(set(a[i]['permissions'] + b[i]['permissions']))
else:
a[i] = b[i]
return a
try:
resource_type_id = resource_type_id and int(resource_type_id)
except ValueError:
resource_type = ResourceType.get_by(name=resource_type_id, app_id=app_id, first=True, to_dict=False)
resource_type_id = resource_type and resource_type.id
result = dict(resources=dict(), groups=dict())
s = time.time()
parent_ids = RoleRelationCRUD.recursive_parent_ids(rid, app_id)
current_app.logger.info('parent ids {0}: {1}'.format(parent_ids, time.time() - s))
for parent_id in parent_ids:
_resources, _groups = cls._extend_resources(parent_id, resource_type_id, app_id)
current_app.logger.info('middle1: {0}'.format(time.time() - s))
_merge(result['resources'], _resources)
current_app.logger.info('middle2: {0}'.format(time.time() - s))
current_app.logger.info(len(_groups))
if not group_flat:
_merge(result['groups'], _groups)
else:
for rg_id in _groups:
items = ResourceGroupCRUD.get_items(rg_id)
for item in items:
if not resource_type_id or resource_type_id == item['resource_type_id']:
item.setdefault('permissions', [])
item['permissions'] = list(set(item['permissions'] + _groups[rg_id]['permissions']))
result['resources'][item['id']] = item
current_app.logger.info('End: {0}'.format(time.time() - s))
result['resources'] = list(result['resources'].values())
result['groups'] = list(result['groups'].values())
if to_record:
op_record.apply_async(args=(app_id, rid, OperateType.READ, ["resources"]),
queue=ACL_QUEUE)
# todo role read log
# user_id = AuditCRUD.get_current_operate_uid()
# audit_role_log.apply_async(args=(app_id, user_id, result.copy()),
# queue=ACL_QUEUE)
return result
@staticmethod
def get_group_ids(resource_id):
return [i.group_id for i in ResourceGroupItems.get_by(resource_id=resource_id, to_dict=False)]
@classmethod
def has_permission(cls, rid, resource_name, resource_type_name, app_id, perm, resource_id=None):
current_app.logger.debug((rid, resource_name, resource_type_name, app_id, perm))
if not resource_id:
resource_type = ResourceType.get_by(app_id=app_id, name=resource_type_name, first=True, to_dict=False)
resource_type or abort(404, ErrFormat.resource_type_not_found.format(resource_type_name))
type_id = resource_type.id
resource = Resource.get_by(name=resource_name, resource_type_id=type_id, first=True, to_dict=False)
resource = resource or abort(403, ErrFormat.resource_not_found.format(resource_name))
resource_id = resource.id
parent_ids = RoleRelationCRUD.recursive_parent_ids(rid, app_id)
group_ids = cls.get_group_ids(resource_id)
for parent_id in parent_ids:
id2perms = RoleRelationCache.get_resources(parent_id, app_id)
current_app.logger.debug(id2perms)
perms = id2perms['id2perms'].get(resource_id, [])
if perms and {perm}.issubset(set(perms)):
return True
for group_id in group_ids:
perms = id2perms['group2perms'].get(group_id, [])
if perms and {perm}.issubset(set(perms)):
return True
return False
@classmethod
def get_permissions(cls, rid, resource_name, app_id):
resource = Resource.get_by(name=resource_name, first=True, to_dict=False)
resource = resource or abort(403, ErrFormat.resource_not_found.format(resource_name))
parent_ids = RoleRelationCRUD.recursive_parent_ids(rid, app_id)
group_ids = cls.get_group_ids(resource.id)
perms = []
for parent_id in parent_ids:
id2perms = RoleRelationCache.get_resources(parent_id, app_id)
            perms += id2perms['id2perms'].get(resource.id, [])
for group_id in group_ids:
perms += id2perms['group2perms'].get(group_id, [])
return set(perms)
@classmethod
def list_resources(cls, app_id, rids, resource_type_id=None, q=None):
query = db.session.query(Resource, RolePermission).filter(
Resource.app_id == app_id,
Resource.deleted.is_(False),
RolePermission.deleted.is_(False),
RolePermission.rid.in_(rids),
).join(
RolePermission, Resource.id == RolePermission.resource_id
)
if resource_type_id:
query = query.filter(Resource.resource_type_id == resource_type_id)
if q:
            query = query.filter(Resource.name.ilike('%{0}%'.format(q)))
return query.all()
@classmethod
def list_resource_groups(cls, app_id, rids, resource_type_id=None, q=None):
query = db.session.query(ResourceGroup, RolePermission).filter(
ResourceGroup.app_id == app_id,
ResourceGroup.deleted.is_(False),
RolePermission.deleted.is_(False),
RolePermission.rid.in_(rids),
).join(
RolePermission, ResourceGroup.id == RolePermission.group_id
)
if resource_type_id:
query = query.filter(ResourceGroup.resource_type_id == resource_type_id)
if q:
            query = query.filter(ResourceGroup.name.ilike('%{0}%'.format(q)))
return query.all()
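
# --- Editor's illustrative sketch, not part of this file ---
# Typical read path for the role layer above: run a single permission check
# and list everything a role can reach through its parent chain. Assumes a
# Flask app context; the resource/type names and ids are placeholders.
from api.lib.perm.acl.role import RoleCRUD


def can_update(rid, app_id):
    # walks recursive_parent_ids and checks direct grants as well as grants
    # made through resource groups
    return RoleCRUD.has_permission(rid, 'web-01', 'server', app_id, 'update')


def all_resources(rid, app_id):
    # group_flat=True expands resource groups into individual resources
    return RoleCRUD.recursive_resources(rid, app_id, group_flat=True)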


@@ -0,0 +1,163 @@
# -*- coding:utf-8 -*-
import copy
import json
import re
from fnmatch import fnmatch
from flask import abort
from api.lib.perm.acl.audit import AuditCRUD
from api.lib.perm.acl.audit import AuditOperateType
from api.lib.perm.acl.cache import UserCache
from api.lib.perm.acl.const import ACL_QUEUE
from api.lib.perm.acl.resp_format import ErrFormat
from api.models.acl import Trigger
from api.tasks.acl import apply_trigger, cancel_trigger
class TriggerCRUD(object):
cls = Trigger
@staticmethod
def get(app_id):
triggers = Trigger.get_by(app_id=app_id)
for trigger in triggers:
trigger['uid'] = json.loads(trigger['uid'] or '[]')
trigger['users'] = [UserCache.get(i).username for i in trigger['uid']]
trigger['roles'] = json.loads(trigger['roles'] or '[]')
trigger['permissions'] = json.loads(trigger['permissions'] or '[]')
return triggers
@staticmethod
def add(app_id, **kwargs):
kwargs.pop('app_id', None)
kwargs['roles'] = json.dumps(kwargs['roles'] or [])
kwargs['permissions'] = json.dumps(kwargs['permissions'] or [])
kwargs['uid'] = json.dumps(kwargs.get('uid') or [])
_kwargs = copy.deepcopy(kwargs)
_kwargs.pop('name', None)
Trigger.get_by(app_id=app_id, **_kwargs) and abort(400, ErrFormat.trigger_exists.format(""))
t = Trigger.create(app_id=app_id, **kwargs)
AuditCRUD.add_trigger_log(app_id, t.id, AuditOperateType.create, {}, t.to_dict(), {})
return t
@staticmethod
def update(_id, **kwargs):
existed = Trigger.get_by_id(_id) or abort(404, ErrFormat.trigger_not_found.format("id={}".format(_id)))
origin = existed.to_dict()
kwargs['roles'] = json.dumps(kwargs['roles'] or [])
kwargs['uid'] = json.dumps(kwargs['uid'] or [])
kwargs['permissions'] = json.dumps(kwargs['permissions'] or [])
existed.update(**kwargs)
AuditCRUD.add_trigger_log(existed.app_id, existed.id, AuditOperateType.update,
origin, existed.to_dict(), {})
return existed
@staticmethod
def delete(_id):
existed = Trigger.get_by_id(_id) or abort(404, ErrFormat.trigger_not_found.format("id={}".format(_id)))
origin = existed.to_dict()
existed.soft_delete()
AuditCRUD.add_trigger_log(existed.app_id, existed.id, AuditOperateType.delete,
origin, {}, {}
)
return existed
@staticmethod
def apply(_id):
trigger = Trigger.get_by_id(_id) or abort(404, ErrFormat.trigger_not_found.format("id={}".format(_id)))
if not trigger.enabled:
return abort(400, ErrFormat.trigger_disabled.format("id={}".format(_id)))
user_id = AuditCRUD.get_current_operate_uid()
apply_trigger.apply_async(args=(_id,), kwargs=dict(operator_uid=user_id), queue=ACL_QUEUE)
@staticmethod
def cancel(_id):
trigger = Trigger.get_by_id(_id) or abort(404, ErrFormat.trigger_not_found.format("id={}".format(_id)))
if not trigger.enabled:
return abort(400, ErrFormat.trigger_disabled.format("id={}".format(_id)))
user_id = AuditCRUD.get_current_operate_uid()
cancel_trigger.apply_async(args=(_id,), kwargs=dict(operator_uid=user_id), queue=ACL_QUEUE)
@staticmethod
def match_triggers(app_id, resource_name, resource_type_id, uid):
triggers = Trigger.get_by(app_id=app_id, enabled=True, resource_type_id=resource_type_id, to_dict=False)
        def _fnmatch(name, wildcard):
            # `re` is already imported at module level; fall back to shell-style
            # globbing only when the wildcard is not a valid regular expression
            try:
                return re.compile(wildcard).findall(name)
            except re.error:
                return fnmatch(name, wildcard)
uid = int(uid) if uid else uid
_match_triggers = []
for trigger in triggers:
uids = json.loads(trigger.uid or '[]')
if trigger.wildcard and uids:
if _fnmatch(resource_name, trigger.wildcard) and uid in uids:
_match_triggers.append(trigger)
elif trigger.wildcard:
if _fnmatch(resource_name, trigger.wildcard):
_match_triggers.append(trigger)
elif uids:
if uid in uids:
_match_triggers.append(trigger)
return _match_triggers
@staticmethod
def get_resources(app_id, resource_type_id, wildcard, uid):
from api.models.acl import Resource
wildcard = wildcard or ''
if wildcard and uid:
query = Resource.get_by(__func_in___key_uid=uid,
app_id=app_id,
resource_type_id=resource_type_id,
only_query=True)
try:
re.compile(wildcard)
resources = query.filter(Resource.name.op('regexp')(wildcard)).all()
except:
resources = query.filter(Resource.name.ilike(wildcard.replace('*', '%'))).all()
elif wildcard:
query = Resource.get_by(app_id=app_id,
resource_type_id=resource_type_id,
only_query=True)
try:
re.compile(wildcard)
resources = query.filter(Resource.name.op('regexp')(wildcard)).all()
except:
resources = query.filter(Resource.name.ilike(wildcard.replace('*', '%'))).all()
elif uid:
resources = Resource.get_by(__func_in___key_uid=uid,
app_id=app_id,
resource_type_id=resource_type_id,
to_dict=False)
else:
resources = []
return resources
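
# --- Editor's illustrative sketch, not part of this file ---
# match_triggers treats a trigger's wildcard first as a regular expression
# and only falls back to fnmatch-style globbing when the pattern does not
# compile. Assumes a Flask request context and a Celery worker on ACL_QUEUE;
# the import path api.lib.perm.acl.trigger is confirmed by the imports in the
# resource module above, the ids below are placeholders.
from api.lib.perm.acl.trigger import TriggerCRUD

matched = TriggerCRUD.match_triggers(app_id=1, resource_name='web-01',
                                     resource_type_id=2, uid=None)
for t in matched:
    TriggerCRUD.apply(t.id)   # enqueues apply_trigger on ACL_QUEUE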


@@ -0,0 +1,122 @@
# -*- coding:utf-8 -*-
import random
import string
import uuid
from flask import abort
from flask_login import current_user
from api.extensions import db
from api.lib.perm.acl.audit import AuditCRUD
from api.lib.perm.acl.audit import AuditOperateType
from api.lib.perm.acl.audit import AuditScope
from api.lib.perm.acl.cache import UserCache
from api.lib.perm.acl.resp_format import ErrFormat
from api.lib.perm.acl.role import RoleCRUD
from api.models.acl import Role
from api.models.acl import User
class UserCRUD(object):
cls = User
@staticmethod
def search(q, page=1, page_size=None):
query = db.session.query(User).filter(User.deleted.is_(False))
if q:
query = query.filter(User.username.ilike('%{0}%'.format(q)))
numfound = query.count()
return numfound, query.offset((page - 1) * page_size).limit(page_size)
@staticmethod
def gen_key_secret():
key = uuid.uuid4().hex
secret = ''.join(random.sample(string.ascii_letters + string.digits + '~!@#$%^&*?', 32))
return key, secret
@classmethod
def add(cls, **kwargs):
existed = User.get_by(username=kwargs['username'])
existed and abort(400, ErrFormat.user_exists.format(kwargs['username']))
existed = User.get_by(username=kwargs['email'])
existed and abort(400, ErrFormat.user_exists.format(kwargs['email']))
kwargs['nickname'] = kwargs.get('nickname') or kwargs['username']
kwargs['block'] = 0
kwargs['key'], kwargs['secret'] = cls.gen_key_secret()
user_employee = db.session.query(User).filter(User.deleted.is_(False)).order_by(User.employee_id.desc()).first()
biggest_employee_id = int(float(user_employee.employee_id)) if user_employee is not None else 0
kwargs['employee_id'] = '{0:04d}'.format(biggest_employee_id + 1)
user = User.create(**kwargs)
RoleCRUD.add_role(user.username, uid=user.uid)
AuditCRUD.add_role_log(None, AuditOperateType.create,
AuditScope.user, user.uid, {}, user.to_dict(), {}, {}
)
return user
@staticmethod
def update(uid, **kwargs):
user = User.get_by(uid=uid, to_dict=False, first=True) or abort(
404, ErrFormat.user_not_found.format("uid={}".format(uid)))
if kwargs.get("username"):
other = User.get_by(username=kwargs['username'], first=True, to_dict=False)
if other is not None and other.uid != user.uid:
return abort(400, ErrFormat.user_exists.format(kwargs['username']))
UserCache.clean(user)
origin = user.to_dict()
if kwargs.get("username") and kwargs['username'] != user.username:
role = Role.get_by(name=user.username, first=True, to_dict=False)
if role is not None:
RoleCRUD.update_role(role.id, **dict(name=kwargs['username']))
user = user.update(**kwargs)
AuditCRUD.add_role_log(None, AuditOperateType.update,
AuditScope.user, user.uid, origin, user.to_dict(), {}, {}
)
return user
@classmethod
def reset_key_secret(cls):
key, secret = cls.gen_key_secret()
current_user.update(key=key, secret=secret)
UserCache.clean(current_user)
return key, secret
@classmethod
def delete(cls, uid):
user = User.get_by(uid=uid, to_dict=False, first=True) or abort(
404, ErrFormat.user_not_found.format("uid={}".format(uid)))
origin = user.to_dict()
user.delete()
UserCache.clean(user)
role = RoleCRUD.get_by_name(user.username, app_id=None)
if role:
RoleCRUD.delete_role(role[0]['id'], force=True)
AuditCRUD.add_role_log(None, AuditOperateType.delete,
AuditScope.user, user.uid, origin, {}, {}, {})
@staticmethod
def get_employees():
return User.get_by(__func_isnot__key_employee_id=None, to_dict=True)
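
# --- Editor's illustrative sketch, not part of this file ---
# UserCRUD.add also creates a same-named role, an API key/secret pair and a
# zero-padded employee_id; any extra keyword arguments are forwarded to
# User.create. Assumes a Flask app context and the import path
# api.lib.perm.acl.user (confirmed by the imports in the role module above).
from api.lib.perm.acl.user import UserCRUD

user = UserCRUD.add(username='alice', email='alice@example.com')
print(user.key, user.employee_id)   # e.g. a 32-char hex key and '0001'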


@@ -0,0 +1,229 @@
# -*- coding:utf-8 -*-
from __future__ import unicode_literals
from functools import wraps
import jwt
from flask import abort
from flask import current_app
from flask import request
from flask import session
from flask_login import login_user
from api.lib.perm.acl.acl import ACLManager
from api.lib.perm.acl.acl import is_app_admin
from api.lib.perm.acl.cache import AppCache
from api.lib.perm.acl.cache import UserCache
from api.lib.perm.acl.resp_format import ErrFormat
from api.models.acl import Role
from api.models.acl import User
def reset_session(user, role=None):
from api.lib.perm.acl.acl import ACLManager
if role is not None:
user_info = ACLManager.get_user_info(role)
else:
user_info = ACLManager.get_user_info(user.username)
session["acl"] = dict(uid=user_info.get("uid"),
avatar=user.avatar if user else user_info.get("avatar"),
userId=user_info.get("uid"),
userName=user_info.get("username"),
nickName=user_info.get("nickname"),
parentRoles=user_info.get("parents"),
childRoles=user_info.get("children"),
roleName=user_info.get("role"))
session["uid"] = user_info.get("uuid")
def _auth_with_key():
key = request.values.get('_key')
secret = request.values.get('_secret')
if not key:
return False
path = request.path
keys = sorted(request.values.keys())
req_args = [str(request.values[k]) for k in keys if k not in ("_key", "_secret") and
not isinstance(request.values[k], (dict, list))]
user, authenticated = User.query.authenticate_with_key(key, secret, req_args, path)
if user and authenticated:
login_user(user)
reset_session(user)
return True
role, authenticated = Role.query.authenticate_with_key(key, secret, req_args, path)
if role and authenticated:
reset_session(None, role=role.name)
return True
return False
def _auth_with_session():
if "acl" in session and "userName" in (session["acl"] or {}):
login_user(UserCache.get(session["acl"]["userName"]))
return True
return False
def _auth_with_token():
auth_headers = request.headers.get('Access-Token', '').strip()
if not auth_headers:
return False
try:
token = auth_headers
data = jwt.decode(token, current_app.config['SECRET_KEY'], algorithms=['HS256'])
user = User.query.filter_by(email=data['sub']).first()
if not user:
return False
login_user(user)
reset_session(user)
return True
except jwt.ExpiredSignatureError:
return False
except (jwt.InvalidTokenError, Exception) as e:
current_app.logger.error(str(e))
return False
def _auth_with_ip_white_list():
ip = request.headers.get('X-Real-IP') or request.remote_addr
key = request.values.get('_key')
secret = request.values.get('_secret')
current_app.logger.info(ip)
if not key and not secret and ip.strip() in current_app.config.get("WHITE_LIST", []): # TODO
user = UserCache.get("worker")
login_user(user)
return True
return False
def _auth_with_app_token():
if _auth_with_session() or _auth_with_token():
if not is_app_admin(request.values.get('app_id')) and request.method != "GET":
return False
elif is_app_admin(request.values.get('app_id')):
return True
if _auth_with_key() and is_app_admin('acl'):
return True
auth_headers = request.headers.get('App-Access-Token', '').strip()
if not auth_headers:
return False
try:
token = auth_headers
data = jwt.decode(token, current_app.config['SECRET_KEY'], algorithms=['HS256'])
current_app.logger.warning(data)
app = AppCache.get(data['sub'])
if not app:
return False
request.values['app_id'] = app.id
return True
except jwt.ExpiredSignatureError:
return False
except (jwt.InvalidTokenError, Exception) as e:
current_app.logger.error(str(e))
return False
def _auth_with_acl_token():
token = request.headers.get('Authorization', "")
if not token.startswith('Bearer '):
abort(401, ErrFormat.unauthorized)
_token = token.split(' ')[-1]
result = ACLManager().authenticate_with_token(_token)
if result.get('authenticated') and result.get('user'):
user = User.query.filter_by(email=result.get("user", {}).get("email", "")).first()
login_user(user)
reset_session(user)
return user
elif result.get('authenticated') is False:
abort(401, ErrFormat.unauthorized)
def auth_required(func):
if request.get_json(silent=True) is not None:
setattr(request, 'values', request.json)
else:
setattr(request, 'values', request.values.to_dict())
@wraps(func)
def wrapper(*args, **kwargs):
if not getattr(func, 'authenticated', True):
return func(*args, **kwargs)
if getattr(func, 'auth_only_with_app_token', False) and _auth_with_app_token():
return func(*args, **kwargs)
elif getattr(func, 'auth_only_with_app_token', False):
if _auth_with_key() and is_app_admin('acl'):
return func(*args, **kwargs)
if request.headers.get('App-Access-Token', '').strip():
return abort(403, ErrFormat.auth_only_with_app_token_failed)
else:
return abort(403, ErrFormat.session_invalid)
if getattr(func, 'auth_with_app_token', False) and _auth_with_app_token():
return func(*args, **kwargs)
elif _auth_with_session() or _auth_with_key() or _auth_with_token() or _auth_with_ip_white_list():
return func(*args, **kwargs)
if _auth_with_acl_token():
return func(*args, **kwargs)
return abort(401, ErrFormat.unauthorized)
return wrapper
def auth_abandoned(func):
setattr(func, "authenticated", False)
@wraps(func)
def wrapper(*args, **kwargs):
return func(*args, **kwargs)
return wrapper
def auth_with_app_token(func):
setattr(func, 'auth_with_app_token', True)
@wraps(func)
def wrapper(*args, **kwargs):
return func(*args, **kwargs)
return wrapper
def auth_only_for_acl(func):
setattr(func, 'auth_only_with_app_token', True)
@wraps(func)
def wrapper(*args, **kwargs):
return func(*args, **kwargs)
return wrapper
def auth_with_acl_token(func):
setattr(func, 'auth_with_acl_token', True)
@wraps(func)
def wrapper(*args, **kwargs):
return func(*args, **kwargs)
    return wrapper
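
# --- Editor's illustrative sketch, not part of this file ---
# How the decorators above are meant to wrap a Flask view: auth_required
# tries session, key/secret, JWT and IP white-list authentication in turn,
# then falls back to the ACL bearer token. Only the decorator name comes
# from this file; the blueprint, route and import path are assumptions.
from flask import Blueprint, jsonify

from api.lib.perm.auth import auth_required   # assumed module path


blueprint = Blueprint('demo', __name__)


@blueprint.route('/api/v1/demo')
@auth_required
def demo_view():
    return jsonify(code=200)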


@@ -0,0 +1,29 @@
# -*- coding:utf-8 -*-
class CommonErrFormat(object):
unauthorized = "未认证"
unknown_error = "未知错误"
invalid_request = "不合法的请求"
invalid_operation = "无效的操作"
not_found = "不存在"
circular_dependency_error = "存在循环依赖!"
unknown_search_error = "未知搜索错误"
invalid_json = "json格式似乎不正确了, 请仔细确认一下!"
datetime_argument_invalid = "参数 {} 格式不正确, 格式必须是: yyyy-mm-dd HH:MM:SS"
argument_value_required = "参数 {} 的值不能为空!"
argument_required = "请求缺少参数 {}"
argument_invalid = "参数 {} 的值无效"
argument_str_length_limit = "参数 {} 的长度必须 <= {}"
role_required = "角色 {} 才能操作!"
user_not_found = "用户 {} 不存在"
no_permission = "您没有资源: {}{}权限!"
no_permission2 = "您没有操作权限!"
no_permission_only_owner = "只有创建人或者管理员才有权限!"

cmdb-api/api/lib/utils.py (new file, 288 lines)

@@ -0,0 +1,288 @@
# -*- coding:utf-8 -*-
import base64
import sys
import time
from typing import Set
import elasticsearch
import redis
import six
from Crypto.Cipher import AES
from elasticsearch import Elasticsearch
from flask import current_app
class BaseEnum(object):
_ALL_ = set() # type: Set[str]
@classmethod
def is_valid(cls, item):
return item in cls.all()
@classmethod
def all(cls):
if not cls._ALL_:
cls._ALL_ = {
getattr(cls, attr)
for attr in dir(cls)
if not attr.startswith("_") and not callable(getattr(cls, attr))
}
return cls._ALL_
def get_page(page):
try:
page = int(page)
except (TypeError, ValueError):
page = 1
return page if page >= 1 else 1
def get_page_size(page_size):
if page_size == "all":
return page_size
try:
page_size = int(page_size)
except (ValueError, TypeError):
page_size = current_app.config.get("DEFAULT_PAGE_COUNT")
return page_size if page_size >= 1 else current_app.config.get("DEFAULT_PAGE_COUNT")
def handle_bool_arg(arg):
if arg in current_app.config.get("BOOL_TRUE"):
return True
return False
def handle_arg_list(arg):
if isinstance(arg, (list, dict)):
return arg
if arg == 0:
return [0]
if not arg:
return []
if isinstance(arg, (six.integer_types, float)):
return [arg]
return list(filter(lambda x: x != "", arg.strip().split(","))) if isinstance(arg, six.string_types) else arg
class RedisHandler(object):
def __init__(self, flask_app=None):
self.flask_app = flask_app
self.r = None
def init_app(self, app):
self.flask_app = app
config = self.flask_app.config
try:
pool = redis.ConnectionPool(
max_connections=config.get("REDIS_MAX_CONN"),
host=config.get("CACHE_REDIS_HOST"),
port=config.get("CACHE_REDIS_PORT"),
password=config.get("CACHE_REDIS_PASSWORD"),
db=config.get("REDIS_DB") or 0)
self.r = redis.Redis(connection_pool=pool)
except Exception as e:
current_app.logger.warning(str(e))
current_app.logger.error("init redis connection failed")
def get(self, key_ids, prefix):
try:
value = self.r.hmget(prefix, key_ids)
except Exception as e:
current_app.logger.error("get redis error, {0}".format(str(e)))
return
return value
def _set(self, obj, prefix):
try:
self.r.hmset(prefix, obj)
except Exception as e:
current_app.logger.error("set redis error, {0}".format(str(e)))
def create_or_update(self, obj, prefix):
self._set(obj, prefix)
def delete(self, key_id, prefix):
try:
ret = self.r.hdel(prefix, key_id)
if not ret:
current_app.logger.warning("[{0}] is not in redis".format(key_id))
except Exception as e:
current_app.logger.error("delete redis key error, {0}".format(str(e)))
class ESHandler(object):
def __init__(self, flask_app=None):
self.flask_app = flask_app
self.es = None
self.index = "cmdb"
def init_app(self, app):
self.flask_app = app
config = self.flask_app.config
if config.get('ES_USER') and config.get('ES_PASSWORD'):
uri = "http://{}:{}@{}:{}/".format(config.get('ES_USER'), config.get('ES_PASSWORD'),
config.get('ES_HOST'), config.get('ES_PORT'))
else:
uri = "{}:{}".format(config.get('ES_HOST'), config.get('ES_PORT') or 9200)
self.es = Elasticsearch(uri,
timeout=10,
max_retries=3,
retry_on_timeout=True,
retry_on_status=(502, 503, 504, "N/A"),
maxsize=10)
try:
if not self.es.indices.exists(index=self.index):
self.es.indices.create(index=self.index)
except elasticsearch.exceptions.RequestError as ex:
if ex.error != 'resource_already_exists_exception':
raise
def update_mapping(self, field, value_type, other):
body = {
"properties": {
field: {"type": value_type},
}}
body['properties'][field].update(other)
self.es.indices.put_mapping(
index=self.index,
body=body
)
def get_index_id(self, ci_id):
try:
return self._get_index_id(ci_id)
except:
return self._get_index_id(ci_id)
def _get_index_id(self, ci_id):
query = {
'query': {
'match': {'ci_id': ci_id}
},
}
res = self.es.search(index=self.index, body=query)
if res['hits']['hits']:
return res['hits']['hits'][-1].get('_id')
def create(self, body):
return self.es.index(index=self.index, body=body).get("_id")
def update(self, ci_id, body):
_id = self.get_index_id(ci_id)
if _id:
return self.es.index(index=self.index, id=_id, body=body).get("_id")
def create_or_update(self, ci_id, body):
try:
self.update(ci_id, body) or self.create(body)
except KeyError:
self.create(body)
def delete(self, ci_id):
try:
_id = self.get_index_id(ci_id)
except KeyError:
return
if _id:
self.es.delete(index=self.index, id=_id)
def read(self, query, filter_path=None):
filter_path = filter_path or []
if filter_path:
filter_path.append('hits.total')
res = self.es.search(index=self.index, body=query, filter_path=filter_path)
if res['hits'].get('hits'):
return (res['hits']['total']['value'],
[i['_source'] for i in res['hits']['hits']],
res.get("aggregations", {}))
else:
return 0, [], {}
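# Example query body for read() (a sketch; the "hostname" field and the handler name are illustrative):
#   query = {"query": {"match": {"hostname": "web-01"}}, "from": 0, "size": 25}
#   numfound, sources, aggs = es_handler.read(query, filter_path=["hits.hits._source"])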
class Lock(object):
def __init__(self, name, timeout=10, app=None, need_lock=True):
self.lock_key = name
self.need_lock = need_lock
self.timeout = timeout
if not app:
app = current_app
self.app = app
try:
self.redis = redis.Redis(host=self.app.config.get('CACHE_REDIS_HOST'),
port=self.app.config.get('CACHE_REDIS_PORT'),
password=self.app.config.get('CACHE_REDIS_PASSWORD'))
except:
self.app.logger.error("cannot connect redis")
raise Exception("cannot connect redis")
def lock(self, timeout=None):
if not timeout:
timeout = self.timeout
retry = 0
while retry < 100:
timestamp = time.time() + timeout + 1
_lock = self.redis.setnx(self.lock_key, timestamp)
if _lock == 1 or (
time.time() > float(self.redis.get(self.lock_key) or sys.maxsize) and
time.time() > float(self.redis.getset(self.lock_key, timestamp) or sys.maxsize)):
break
else:
retry += 1
time.sleep(0.6)
if retry >= 100:
raise Exception("get lock failed...")
def release(self):
if time.time() < float(self.redis.get(self.lock_key) or 0):  # key may already be gone
self.redis.delete(self.lock_key)
def __enter__(self):
if self.need_lock:
self.lock()
def __exit__(self, exc_type, exc_val, exc_tb):
if self.need_lock:
self.release()
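# Usage sketch (the lock name is illustrative; see ci_relation_cache in api/tasks/cmdb.py below
# for a real caller):
#   with Lock("CIRelation_123", timeout=10):
#       ...  # critical section guarded across workers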
class AESCrypto(object):
BLOCK_SIZE = 16 # Bytes
pad = lambda s: s + ((AESCrypto.BLOCK_SIZE - len(s) % AESCrypto.BLOCK_SIZE) *
chr(AESCrypto.BLOCK_SIZE - len(s) % AESCrypto.BLOCK_SIZE))
unpad = lambda s: s[:-ord(s[len(s) - 1:])]
iv = '0102030405060708'
@staticmethod
def key():
key = current_app.config.get("SECRET_KEY")[:16]
if len(key) < 16:
key = "{}{}".format(key, (16 - len(key)) * "x")
return key.encode('utf8')
@classmethod
def encrypt(cls, data):
data = cls.pad(data)
cipher = AES.new(cls.key(), AES.MODE_CBC, cls.iv.encode('utf8'))
return base64.b64encode(cipher.encrypt(data.encode('utf8'))).decode('utf8')
@classmethod
def decrypt(cls, data):
encode_bytes = base64.decodebytes(data.encode('utf8'))
cipher = AES.new(cls.key(), AES.MODE_CBC, cls.iv.encode('utf8'))
text_decrypted = cipher.decrypt(encode_bytes)
return cls.unpad(text_decrypted).decode('utf8')
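# Round-trip sketch (requires an app context, since key() reads SECRET_KEY from the config):
#   token = AESCrypto.encrypt("root_password")
#   assert AESCrypto.decrypt(token) == "root_password"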

cmdb-api/api/lib/webhook.py (new file, 109 lines)
# -*- coding:utf-8 -*-
import json
from functools import partial
import requests
from jinja2 import Template
from requests.auth import HTTPBasicAuth
from requests_oauthlib import OAuth2Session
class BearerAuth(requests.auth.AuthBase):
def __init__(self, token):
self.token = token
def __call__(self, r):
r.headers["authorization"] = "Bearer {}".format(self.token)
return r
def _wrap_auth(**kwargs):
auth_type = (kwargs.get('type') or "").lower()
if auth_type == "basicauth":
return HTTPBasicAuth(kwargs.get('username'), kwargs.get('password'))
elif auth_type == "bearer":
return BearerAuth(kwargs.get('token'))
elif auth_type == 'oauth2.0':
client_id = kwargs.get('client_id')
client_secret = kwargs.get('client_secret')
authorization_base_url = kwargs.get('authorization_base_url')
token_url = kwargs.get('token_url')
redirect_url = kwargs.get('redirect_url')
scope = kwargs.get('scope')
oauth2_session = OAuth2Session(client_id, scope=scope or None)
oauth2_session.authorization_url(authorization_base_url)
oauth2_session.fetch_token(token_url, client_secret=client_secret, authorization_response=redirect_url)
return oauth2_session
elif auth_type == "apikey":
return HTTPBasicAuth(kwargs.get('key'), kwargs.get('value'))
def webhook_request(webhook, payload):
"""
:param webhook:
{
"url": "https://veops.cn"
"method": "GET|POST|PUT|DELETE"
"body": {},
"headers": {
"Content-Type": "Application/json"
},
"parameters": {
"key": "value"
},
"authorization": {
"type": "BasicAuth|Bearer|OAuth2.0|APIKey",
"password": "mmmm", # BasicAuth
"username": "bbb", # BasicAuth
"token": "xxx", # Bearer
"key": "xxx", # APIKey
"value": "xxx", # APIKey
"client_id": "xxx", # OAuth2.0
"client_secret": "xxx", # OAuth2.0
"authorization_base_url": "xxx", # OAuth2.0
"token_url": "xxx", # OAuth2.0
"redirect_url": "xxx", # OAuth2.0
"scope": "xxx" # OAuth2.0
}
}
:param payload:
:return:
"""
assert webhook.get('url') is not None
payload = {k: v or '' for k, v in payload.items()}
url = Template(webhook['url']).render(payload)
params = webhook.get('parameters') or None
if isinstance(params, dict):
params = json.loads(Template(json.dumps(params)).render(payload))
headers = json.loads(Template(json.dumps(webhook.get('headers') or {})).render(payload))
data = Template(json.dumps(webhook.get('body', ''))).render(payload)
auth = _wrap_auth(**webhook.get('authorization', {}))
if (webhook.get('authorization', {}).get("type") or '').lower() == 'oauth2.0':
request = getattr(auth, webhook.get('method', 'GET').lower())
else:
request = partial(requests.request, webhook.get('method', 'GET'))
return request(
url,
params=params,
headers=headers or None,
data=data,
auth=auth
)
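A minimal usage sketch for webhook_request; the URL, header and payload values below are illustrative, and the Jinja2 placeholders are rendered from the payload dict exactly as in the function above:

demo_webhook = {
    "url": "https://example.com/notify/{{ ci_id }}",
    "method": "POST",
    "headers": {"Content-Type": "application/json"},
    "body": {"hostname": "{{ hostname }}"},
    "authorization": {"type": "Bearer", "token": "dummy-token"},
}
resp = webhook_request(demo_webhook, {"ci_id": 42, "hostname": "web-01"})
resp.raise_for_status()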

New file (5 lines)
# -*- coding:utf-8 -*-
from .cmdb import *
from .acl import *

cmdb-api/api/models/acl.py (new file, 377 lines)
# -*- coding:utf-8 -*-
import copy
import hashlib
from datetime import datetime
import ldap
from flask import current_app
from flask_sqlalchemy import BaseQuery
from api.extensions import db
from api.lib.database import CRUDModel
from api.lib.database import Model
from api.lib.database import SoftDeleteMixin
from api.lib.perm.acl.const import ACL_QUEUE
from api.lib.perm.acl.const import OperateType
class App(Model):
__tablename__ = "acl_apps"
name = db.Column(db.String(64), index=True)
description = db.Column(db.Text)
app_id = db.Column(db.Text)
secret_key = db.Column(db.Text)
class UserQuery(BaseQuery):
def _join(self, *args, **kwargs):
super(UserQuery, self)._join(*args, **kwargs)
def authenticate(self, login, password):
user = self.filter(db.or_(User.username == login,
User.email == login)).filter(User.deleted.is_(False)).filter(User.block == 0).first()
if user:
current_app.logger.info(user)
authenticated = user.check_password(password)
if authenticated:
from api.tasks.acl import op_record
op_record.apply_async(args=(None, login, OperateType.LOGIN, ["ACL"]), queue=ACL_QUEUE)
else:
authenticated = False
return user, authenticated
def authenticate_with_key(self, key, secret, args, path):
user = self.filter(User.key == key).filter(User.deleted.is_(False)).filter(User.block == 0).first()
if not user:
return None, False
if user and hashlib.sha1('{0}{1}{2}'.format(
path, user.secret, "".join(args)).encode("utf-8")).hexdigest() == secret:
authenticated = True
else:
authenticated = False
return user, authenticated
def authenticate_with_ldap(self, username, password):
ldap_conn = ldap.initialize(current_app.config.get('LDAP_SERVER'))
ldap_conn.protocol_version = 3
ldap_conn.set_option(ldap.OPT_REFERRALS, 0)
if '@' in username:
email = username
who = current_app.config.get('LDAP_USER_DN').format(username.split('@')[0])
else:
who = current_app.config.get('LDAP_USER_DN').format(username)
email = "{}@{}".format(who, current_app.config.get('LDAP_DOMAIN'))
username = username.split('@')[0]
user = self.get_by_username(username)
try:
if not password:
raise ldap.INVALID_CREDENTIALS
ldap_conn.simple_bind_s(who, password)
if not user:
from api.lib.perm.acl.user import UserCRUD
user = UserCRUD.add(username=username, email=email)
from api.tasks.acl import op_record
op_record.apply_async(args=(None, username, OperateType.LOGIN, ["ACL"]), queue=ACL_QUEUE)
return user, True
except ldap.INVALID_CREDENTIALS:
return user, False
def search(self, key):
query = self.filter(db.or_(User.email == key,
User.nickname.ilike('%' + key + '%'),
User.username.ilike('%' + key + '%')))
return query
def get_by_username(self, username):
user = self.filter(User.username == username).first()
return user
def get_by_nickname(self, nickname):
user = self.filter(User.nickname == nickname).first()
return user
def get_by_wxid(self, wx_id):
user = self.filter(User.wx_id == wx_id).first()
return user
def get(self, uid):
user = self.filter(User.uid == uid).first()
return copy.deepcopy(user)
class User(CRUDModel, SoftDeleteMixin):
__tablename__ = 'users'
__bind_key__ = "user"
query_class = UserQuery
uid = db.Column(db.Integer, primary_key=True, autoincrement=True)
username = db.Column(db.String(32), unique=True)
nickname = db.Column(db.String(20), nullable=True)
department = db.Column(db.String(20))
catalog = db.Column(db.String(64))
email = db.Column(db.String(100), unique=True, nullable=False)
mobile = db.Column(db.String(14), unique=True)
_password = db.Column("password", db.String(80))
key = db.Column(db.String(32), nullable=False)
secret = db.Column(db.String(32), nullable=False)
date_joined = db.Column(db.DateTime, default=datetime.utcnow)
last_login = db.Column(db.DateTime, default=datetime.utcnow)
block = db.Column(db.Boolean, default=False)
has_logined = db.Column(db.Boolean, default=False)
wx_id = db.Column(db.String(32))
employee_id = db.Column(db.String(16), index=True)
avatar = db.Column(db.String(128))
# apps = db.Column(db.JSON)
def __str__(self):
return self.username
def is_active(self):
return not self.block
def get_id(self):
return self.uid
@staticmethod
def is_authenticated():
return True
def _get_password(self):
return self._password
def _set_password(self, password):
self._password = hashlib.md5(password.encode('utf-8')).hexdigest()
password = db.synonym("_password", descriptor=property(_get_password, _set_password))
def check_password(self, password):
if self.password is None:
return False
return self.password == password or self.password == hashlib.md5(password.encode('utf-8')).hexdigest()
class RoleQuery(BaseQuery):
def _join(self, *args, **kwargs):
super(RoleQuery, self)._join(*args, **kwargs)
def authenticate(self, login, password):
role = self.filter(Role.name == login).first()
if role:
authenticated = role.check_password(password)
if authenticated:
from api.tasks.acl import op_record
op_record.apply_async(args=(None, login, OperateType.LOGIN, ["ACL"]), queue=ACL_QUEUE)
else:
authenticated = False
return role, authenticated
def authenticate_with_key(self, key, secret, args, path):
role = self.filter(Role.key == key).filter(Role.deleted.is_(False)).first()
if not role:
return None, False
if role and hashlib.sha1('{0}{1}{2}'.format(
path, role.secret, "".join(args)).encode("utf-8")).hexdigest() == secret:
authenticated = True
else:
authenticated = False
return role, authenticated
class Role(Model):
__tablename__ = "acl_roles"
query_class = RoleQuery
name = db.Column(db.String(64), index=True, nullable=False)
is_app_admin = db.Column(db.Boolean, default=False)
app_id = db.Column(db.Integer, db.ForeignKey("acl_apps.id"))
uid = db.Column(db.Integer)
_password = db.Column("password", db.String(80))
key = db.Column(db.String(32))
secret = db.Column(db.String(32))
def _get_password(self):
return self._password
def _set_password(self, password):
if password:
self._password = hashlib.md5(password.encode('utf-8')).hexdigest()
password = db.synonym("_password", descriptor=property(_get_password, _set_password))
def check_password(self, password):
if self.password is None:
return False
return self.password == password or self.password == hashlib.md5(password.encode('utf-8')).hexdigest()
class RoleRelation(Model):
__tablename__ = "acl_role_relations"
parent_id = db.Column(db.Integer, db.ForeignKey('acl_roles.id'))
child_id = db.Column(db.Integer, db.ForeignKey('acl_roles.id'))
app_id = db.Column(db.Integer, db.ForeignKey('acl_apps.id'))
class ResourceType(Model):
__tablename__ = "acl_resource_types"
name = db.Column(db.String(64), index=True)
description = db.Column(db.Text)
app_id = db.Column(db.Integer, db.ForeignKey('acl_apps.id'))
class ResourceGroup(Model):
__tablename__ = "acl_resource_groups"
name = db.Column(db.String(64), index=True, nullable=False)
resource_type_id = db.Column(db.Integer, db.ForeignKey("acl_resource_types.id"))
uid = db.Column(db.Integer, index=True)
app_id = db.Column(db.Integer, db.ForeignKey('acl_apps.id'))
resource_type = db.relationship("ResourceType", backref='acl_resource_groups.resource_type_id')
class Resource(Model):
__tablename__ = "acl_resources"
name = db.Column(db.String(128), nullable=False)
resource_type_id = db.Column(db.Integer, db.ForeignKey("acl_resource_types.id"))
uid = db.Column(db.Integer, index=True)
app_id = db.Column(db.Integer, db.ForeignKey("acl_apps.id"))
resource_type = db.relationship("ResourceType", backref='acl_resources.resource_type_id')
class ResourceGroupItems(Model):
__tablename__ = "acl_resource_group_items"
group_id = db.Column(db.Integer, db.ForeignKey('acl_resource_groups.id'), nullable=False)
resource_id = db.Column(db.Integer, db.ForeignKey('acl_resources.id'), nullable=False)
resource = db.relationship("Resource", backref='acl_resource_group_items.resource_id')
class Permission(Model):
__tablename__ = "acl_permissions"
name = db.Column(db.String(64), nullable=False)
resource_type_id = db.Column(db.Integer, db.ForeignKey("acl_resource_types.id"))
app_id = db.Column(db.Integer, db.ForeignKey("acl_apps.id"))
class RolePermission(Model):
__tablename__ = "acl_role_permissions"
rid = db.Column(db.Integer, db.ForeignKey('acl_roles.id'))
resource_id = db.Column(db.Integer, db.ForeignKey('acl_resources.id'))
group_id = db.Column(db.Integer, db.ForeignKey('acl_resource_groups.id'))
perm_id = db.Column(db.Integer, db.ForeignKey('acl_permissions.id'))
app_id = db.Column(db.Integer, db.ForeignKey("acl_apps.id"))
perm = db.relationship("Permission", backref='acl_role_permissions.perm_id')
class Trigger(Model):
__tablename__ = "acl_triggers"
name = db.Column(db.String(128))
wildcard = db.Column(db.Text)
uid = db.Column(db.Text) # TODO
resource_type_id = db.Column(db.Integer, db.ForeignKey('acl_resource_types.id'))
roles = db.Column(db.Text) # TODO
permissions = db.Column(db.Text) # TODO
enabled = db.Column(db.Boolean, default=True)
app_id = db.Column(db.Integer, db.ForeignKey('acl_apps.id'))
class OperationRecord(Model):
__tablename__ = "acl_operation_records"
app = db.Column(db.String(32), index=True)
rolename = db.Column(db.String(32), index=True)
operate = db.Column(db.Enum(*OperateType.all()), nullable=False)
obj = db.Column(db.JSON)
class AuditRoleLog(Model):
__tablename__ = "acl_audit_role_logs"
app_id = db.Column(db.Integer, index=True)
operate_uid = db.Column(db.Integer, comment='操作人uid', index=True)
operate_type = db.Column(db.String(32), comment='操作类型', index=True)
scope = db.Column(db.String(16), comment='范围')
link_id = db.Column(db.Integer, comment='资源id', index=True)
origin = db.Column(db.JSON, default=dict(), comment='原始数据')
current = db.Column(db.JSON, default=dict(), comment='当前数据')
extra = db.Column(db.JSON, default=dict(), comment='其他内容')
source = db.Column(db.String(16), default='', comment='来源')
class AuditResourceLog(Model):
__tablename__ = "acl_audit_resource_logs"
app_id = db.Column(db.Integer, index=True)
operate_uid = db.Column(db.Integer, comment='操作人uid', index=True)
operate_type = db.Column(db.String(16), comment='操作类型', index=True)
scope = db.Column(db.String(16), comment='范围')
link_id = db.Column(db.Integer, comment='资源名', index=True)
origin = db.Column(db.JSON, default=dict(), comment='原始数据')
current = db.Column(db.JSON, default=dict(), comment='当前数据')
extra = db.Column(db.JSON, default=dict(), comment='权限名')
source = db.Column(db.String(16), default='', comment='来源')
class AuditPermissionLog(Model):
__tablename__ = "acl_audit_permission_logs"
app_id = db.Column(db.Integer, index=True)
operate_uid = db.Column(db.Integer, comment='操作人uid', index=True)
operate_type = db.Column(db.String(16), comment='操作类型', index=True)
rid = db.Column(db.Integer, comment='角色id', index=True)
resource_type_id = db.Column(db.Integer, comment='资源类型id', index=True)
resource_ids = db.Column(db.JSON, default=[], comment='资源')
group_ids = db.Column(db.JSON, default=[], comment='资源组')
permission_ids = db.Column(db.JSON, default=[], comment='权限')
source = db.Column(db.String(16), comment='来源')
class AuditTriggerLog(Model):
__tablename__ = "acl_audit_trigger_logs"
app_id = db.Column(db.Integer, index=True)
trigger_id = db.Column(db.Integer, comment='trigger', index=True)
operate_uid = db.Column(db.Integer, comment='操作人uid', index=True)
operate_type = db.Column(db.String(16), comment='操作类型', index=True)
origin = db.Column(db.JSON, default=dict(), comment='原始数据')
current = db.Column(db.JSON, default=dict(), comment='当前数据')
extra = db.Column(db.JSON, default=dict(), comment='权限名')
source = db.Column(db.String(16), default='', comment='来源')
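For reference, a client-side sketch of the signature scheme verified by UserQuery.authenticate_with_key and RoleQuery.authenticate_with_key above; the path and argument values are illustrative, and the caller must join the request argument values in the same order the server does:

import hashlib

def api_key_signature(path, secret, arg_values):
    # mirrors the server side: sha1(path + secret + "".join(args)).hexdigest()
    return hashlib.sha1(
        '{0}{1}{2}'.format(path, secret, "".join(arg_values)).encode("utf-8")).hexdigest()

# e.g. api_key_signature("/v0.1/ci/s", user.secret, ["hostname:web*", "1", "25"])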

cmdb-api/api/models/cmdb.py (new file, 506 lines)
# -*- coding:utf-8 -*-
import datetime
from sqlalchemy.dialects.mysql import DOUBLE
from api.extensions import db
from api.lib.cmdb.const import AutoDiscoveryType
from api.lib.cmdb.const import CIStatusEnum
from api.lib.cmdb.const import CITypeOperateType
from api.lib.cmdb.const import ConstraintEnum
from api.lib.cmdb.const import OperateType
from api.lib.cmdb.const import ValueTypeEnum
from api.lib.database import Model, Model2
# template
class RelationType(Model):
__tablename__ = "c_relation_types"
name = db.Column(db.String(16), index=True, nullable=False)
class CITypeGroup(Model):
__tablename__ = "c_ci_type_groups"
name = db.Column(db.String(32), nullable=False)
order = db.Column(db.Integer, default=0)
class CITypeGroupItem(Model):
__tablename__ = "c_ci_type_group_items"
group_id = db.Column(db.Integer, db.ForeignKey("c_ci_type_groups.id"), nullable=False)
type_id = db.Column(db.Integer, db.ForeignKey("c_ci_types.id"), nullable=False)
order = db.Column(db.SmallInteger, default=0)
class CIType(Model):
__tablename__ = "c_ci_types"
name = db.Column(db.String(32), nullable=False)
alias = db.Column(db.String(32), nullable=False)
unique_id = db.Column(db.Integer, db.ForeignKey("c_attributes.id"), nullable=False)
enabled = db.Column(db.Boolean, default=True, nullable=False)
is_attached = db.Column(db.Boolean, default=False, nullable=False)
icon = db.Column(db.Text)
order = db.Column(db.SmallInteger, default=0, nullable=False)
default_order_attr = db.Column(db.String(33))
unique_key = db.relationship("Attribute", backref="c_ci_types.unique_id")
uid = db.Column(db.Integer, index=True)
class CITypeRelation(Model):
__tablename__ = "c_ci_type_relations"
parent_id = db.Column(db.Integer, db.ForeignKey("c_ci_types.id"), nullable=False) # source
child_id = db.Column(db.Integer, db.ForeignKey("c_ci_types.id"), nullable=False) # dst
relation_type_id = db.Column(db.Integer, db.ForeignKey("c_relation_types.id"), nullable=False)
constraint = db.Column(db.Enum(*ConstraintEnum.all()), default=ConstraintEnum.One2Many)
parent = db.relationship("CIType", primaryjoin="CIType.id==CITypeRelation.parent_id")
child = db.relationship("CIType", primaryjoin="CIType.id==CITypeRelation.child_id")
relation_type = db.relationship("RelationType", backref="c_ci_type_relations.relation_type_id")
class Attribute(Model):
__tablename__ = "c_attributes"
name = db.Column(db.String(32), nullable=False)
alias = db.Column(db.String(32), nullable=False)
value_type = db.Column(db.Enum(*ValueTypeEnum.all()), default=ValueTypeEnum.TEXT, nullable=False)
is_choice = db.Column(db.Boolean, default=False)
is_list = db.Column(db.Boolean, default=False)
is_unique = db.Column(db.Boolean, default=False)
is_index = db.Column(db.Boolean, default=False)
is_link = db.Column(db.Boolean, default=False)
is_password = db.Column(db.Boolean, default=False)
is_sortable = db.Column(db.Boolean, default=False)
default = db.Column(db.JSON) # {"default": None}
is_computed = db.Column(db.Boolean, default=False)
compute_expr = db.Column(db.Text)
compute_script = db.Column(db.Text)
choice_web_hook = db.Column(db.JSON)
choice_other = db.Column(db.JSON)
uid = db.Column(db.Integer, index=True)
option = db.Column(db.JSON)
class CITypeAttribute(Model):
__tablename__ = "c_ci_type_attributes"
type_id = db.Column(db.Integer, db.ForeignKey("c_ci_types.id"), nullable=False)
attr_id = db.Column(db.Integer, db.ForeignKey("c_attributes.id"), nullable=False)
order = db.Column(db.Integer, default=0)
is_required = db.Column(db.Boolean, default=False)
default_show = db.Column(db.Boolean, default=True)
attr = db.relationship("Attribute", backref="c_ci_type_attributes.attr_id")
class CITypeAttributeGroup(Model):
__tablename__ = "c_ci_type_attribute_groups"
name = db.Column(db.String(64), nullable=False)
type_id = db.Column(db.Integer, db.ForeignKey("c_ci_types.id"), nullable=False)
order = db.Column(db.SmallInteger, default=0)
class CITypeAttributeGroupItem(Model):
__tablename__ = "c_ci_type_attribute_group_items"
group_id = db.Column(db.Integer, db.ForeignKey("c_ci_type_attribute_groups.id"), nullable=False)
attr_id = db.Column(db.Integer, db.ForeignKey("c_attributes.id"), nullable=False)
order = db.Column(db.SmallInteger, default=0)
class CITypeTrigger(Model):
__tablename__ = "c_c_t_t"
type_id = db.Column(db.Integer, db.ForeignKey('c_ci_types.id'), nullable=False)
attr_id = db.Column(db.Integer, db.ForeignKey("c_attributes.id"))
option = db.Column('notify', db.JSON)
class CITriggerHistory(Model):
__tablename__ = "c_ci_trigger_histories"
operate_type = db.Column(db.Enum(*OperateType.all(), name="operate_type"))
record_id = db.Column(db.Integer, db.ForeignKey("c_records.id"))
ci_id = db.Column(db.Integer, index=True, nullable=False)
trigger_id = db.Column(db.Integer, db.ForeignKey("c_c_t_t.id"))
trigger_name = db.Column(db.String(64))
is_ok = db.Column(db.Boolean, default=False)
notify = db.Column(db.Text)
webhook = db.Column(db.Text)
class CITypeUniqueConstraint(Model):
__tablename__ = "c_c_t_u_c"
type_id = db.Column(db.Integer, db.ForeignKey('c_ci_types.id'), nullable=False)
attr_ids = db.Column(db.JSON) # [attr_id, ]
# instance
class CI(Model):
__tablename__ = "c_cis"
type_id = db.Column(db.Integer, db.ForeignKey("c_ci_types.id"), nullable=False)
status = db.Column(db.Enum(*CIStatusEnum.all(), name="status"))
heartbeat = db.Column(db.DateTime, default=lambda: datetime.datetime.now())
is_auto_discovery = db.Column('a', db.Boolean, default=False)
ci_type = db.relationship("CIType", backref="c_cis.type_id")
class CIRelation(Model):
__tablename__ = "c_ci_relations"
first_ci_id = db.Column(db.Integer, db.ForeignKey("c_cis.id"), nullable=False)
second_ci_id = db.Column(db.Integer, db.ForeignKey("c_cis.id"), nullable=False)
relation_type_id = db.Column(db.Integer, db.ForeignKey("c_relation_types.id"), nullable=False)
more = db.Column(db.Integer, db.ForeignKey("c_cis.id"))
first_ci = db.relationship("CI", primaryjoin="CI.id==CIRelation.first_ci_id")
second_ci = db.relationship("CI", primaryjoin="CI.id==CIRelation.second_ci_id")
relation_type = db.relationship("RelationType", backref="c_ci_relations.relation_type_id")
class IntegerChoice(Model):
__tablename__ = 'c_choice_integers'
attr_id = db.Column(db.Integer, db.ForeignKey('c_attributes.id'), nullable=False)
value = db.Column(db.Integer, nullable=False)
option = db.Column(db.JSON)
attr = db.relationship("Attribute", backref="c_choice_integers.attr_id")
class FloatChoice(Model):
__tablename__ = 'c_choice_floats'
attr_id = db.Column(db.Integer, db.ForeignKey('c_attributes.id'), nullable=False)
value = db.Column(DOUBLE, nullable=False)
option = db.Column(db.JSON)
attr = db.relationship("Attribute", backref="c_choice_floats.attr_id")
class TextChoice(Model):
__tablename__ = 'c_choice_texts'
attr_id = db.Column(db.Integer, db.ForeignKey('c_attributes.id'), nullable=False)
value = db.Column(db.Text, nullable=False)
option = db.Column(db.JSON)
attr = db.relationship("Attribute", backref="c_choice_texts.attr_id")
class CIIndexValueInteger(Model):
__tablename__ = "c_value_index_integers"
ci_id = db.Column(db.Integer, db.ForeignKey('c_cis.id'), nullable=False)
attr_id = db.Column(db.Integer, db.ForeignKey('c_attributes.id'), nullable=False)
value = db.Column(db.Integer, nullable=False)
ci = db.relationship("CI", backref="c_value_index_integers.ci_id")
attr = db.relationship("Attribute", backref="c_value_index_integers.attr_id")
__table_args__ = (db.Index("integer_attr_value_index", "attr_id", "value"),)
class CIIndexValueFloat(Model):
__tablename__ = "c_value_index_floats"
ci_id = db.Column(db.Integer, db.ForeignKey('c_cis.id'), nullable=False)
attr_id = db.Column(db.Integer, db.ForeignKey('c_attributes.id'), nullable=False)
value = db.Column(DOUBLE, nullable=False)
ci = db.relationship("CI", backref="c_value_index_floats.ci_id")
attr = db.relationship("Attribute", backref="c_value_index_floats.attr_id")
__table_args__ = (db.Index("float_attr_value_index", "attr_id", "value"),)
class CIIndexValueText(Model):
__tablename__ = "c_value_index_texts"
ci_id = db.Column(db.Integer, db.ForeignKey('c_cis.id'), nullable=False)
attr_id = db.Column(db.Integer, db.ForeignKey('c_attributes.id'), nullable=False)
value = db.Column(db.String(128), nullable=False)
ci = db.relationship("CI", backref="c_value_index_texts.ci_id")
attr = db.relationship("Attribute", backref="c_value_index_texts.attr_id")
__table_args__ = (db.Index("text_attr_value_index", "attr_id", "value"),)
class CIIndexValueDateTime(Model):
__tablename__ = "c_value_index_datetime"
ci_id = db.Column(db.Integer, db.ForeignKey('c_cis.id'), nullable=False)
attr_id = db.Column(db.Integer, db.ForeignKey('c_attributes.id'), nullable=False)
value = db.Column(db.DateTime, nullable=False)
ci = db.relationship("CI", backref="c_value_index_datetime.ci_id")
attr = db.relationship("Attribute", backref="c_value_index_datetime.attr_id")
__table_args__ = (db.Index("datetime_attr_value_index", "attr_id", "value"),)
class CIValueInteger(Model):
"""
Deprecated in a future version
"""
__tablename__ = "c_value_integers"
ci_id = db.Column(db.Integer, db.ForeignKey('c_cis.id'), nullable=False)
attr_id = db.Column(db.Integer, db.ForeignKey('c_attributes.id'), nullable=False)
value = db.Column(db.Integer, nullable=False)
ci = db.relationship("CI", backref="c_value_integers.ci_id")
attr = db.relationship("Attribute", backref="c_value_integers.attr_id")
class CIValueFloat(Model):
"""
Deprecated in a future version
"""
__tablename__ = "c_value_floats"
ci_id = db.Column(db.Integer, db.ForeignKey('c_cis.id'), nullable=False)
attr_id = db.Column(db.Integer, db.ForeignKey('c_attributes.id'), nullable=False)
value = db.Column(DOUBLE, nullable=False)
ci = db.relationship("CI", backref="c_value_floats.ci_id")
attr = db.relationship("Attribute", backref="c_value_floats.attr_id")
class CIValueText(Model):
__tablename__ = "c_value_texts"
ci_id = db.Column(db.Integer, db.ForeignKey('c_cis.id'), nullable=False)
attr_id = db.Column(db.Integer, db.ForeignKey('c_attributes.id'), nullable=False)
value = db.Column(db.Text, nullable=False)
ci = db.relationship("CI", backref="c_value_texts.ci_id")
attr = db.relationship("Attribute", backref="c_value_texts.attr_id")
class CIValueDateTime(Model):
"""
Deprecated in a future version
"""
__tablename__ = "c_value_datetime"
ci_id = db.Column(db.Integer, db.ForeignKey('c_cis.id'), nullable=False)
attr_id = db.Column(db.Integer, db.ForeignKey('c_attributes.id'), nullable=False)
value = db.Column(db.DateTime, nullable=False)
ci = db.relationship("CI", backref="c_value_datetime.ci_id")
attr = db.relationship("Attribute", backref="c_value_datetime.attr_id")
class CIValueJson(Model):
__tablename__ = "c_value_json"
ci_id = db.Column(db.Integer, db.ForeignKey('c_cis.id'), nullable=False)
attr_id = db.Column(db.Integer, db.ForeignKey('c_attributes.id'), nullable=False)
value = db.Column(db.JSON, nullable=False)
ci = db.relationship("CI", backref="c_value_json.ci_id")
attr = db.relationship("Attribute", backref="c_value_json.attr_id")
# history
class OperationRecord(Model2):
__tablename__ = "c_records"
uid = db.Column(db.Integer, index=True, nullable=False)
origin = db.Column(db.String(32), nullable=True)
ticket_id = db.Column(db.String(32), nullable=True)
reason = db.Column(db.Text)
type_id = db.Column(db.Integer, index=True)
class AttributeHistory(Model):
__tablename__ = "c_attribute_histories"
operate_type = db.Column(db.Enum(*OperateType.all(), name="operate_type"))
record_id = db.Column(db.Integer, db.ForeignKey("c_records.id"), nullable=False)
ci_id = db.Column(db.Integer, index=True, nullable=False)
attr_id = db.Column(db.Integer, index=True)
old = db.Column(db.Text)
new = db.Column(db.Text)
class CIRelationHistory(Model):
__tablename__ = "c_relation_histories"
operate_type = db.Column(db.Enum(OperateType.ADD, OperateType.DELETE, name="operate_type"))
record_id = db.Column(db.Integer, db.ForeignKey("c_records.id"), nullable=False)
first_ci_id = db.Column(db.Integer)
second_ci_id = db.Column(db.Integer)
relation_type_id = db.Column(db.Integer, db.ForeignKey("c_relation_types.id"))
relation_id = db.Column(db.Integer, nullable=False)
class CITypeHistory(Model):
__tablename__ = "c_ci_type_histories"
operate_type = db.Column(db.Enum(*CITypeOperateType.all(), name="operate_type"))
type_id = db.Column(db.Integer, index=True, nullable=False)
attr_id = db.Column(db.Integer)
trigger_id = db.Column(db.Integer)
unique_constraint_id = db.Column(db.Integer)
uid = db.Column(db.Integer, index=True)
change = db.Column(db.JSON)
# preference
class PreferenceShowAttributes(Model):
__tablename__ = "c_psa"
uid = db.Column(db.Integer, index=True, nullable=False)
type_id = db.Column(db.Integer, db.ForeignKey("c_ci_types.id"), nullable=False)
attr_id = db.Column(db.Integer, db.ForeignKey("c_attributes.id"))
order = db.Column(db.SmallInteger, default=0)
is_fixed = db.Column(db.Boolean, default=False)
ci_type = db.relationship("CIType", backref="c_psa.type_id")
attr = db.relationship("Attribute", backref="c_psa.attr_id")
class PreferenceTreeView(Model):
__tablename__ = "c_ptv"
uid = db.Column(db.Integer, index=True, nullable=False)
type_id = db.Column(db.Integer, db.ForeignKey("c_ci_types.id"), nullable=False)
levels = db.Column(db.JSON)
class PreferenceRelationView(Model):
__tablename__ = "c_prv"
uid = db.Column(db.Integer, index=True, nullable=False)
name = db.Column(db.String(64), index=True, nullable=False)
cr_ids = db.Column(db.JSON) # [{parent_id: x, child_id: y}]
is_public = db.Column(db.Boolean, default=False)
class PreferenceSearchOption(Model):
__tablename__ = "c_pso"
name = db.Column(db.String(64))
prv_id = db.Column(db.Integer, db.ForeignKey("c_prv.id"))
ptv_id = db.Column(db.Integer, db.ForeignKey("c_ptv.id"))
type_id = db.Column(db.Integer, db.ForeignKey("c_ci_types.id"))
uid = db.Column(db.Integer, index=True)
option = db.Column(db.JSON)
# custom
class CustomDashboard(Model):
__tablename__ = "c_c_d"
name = db.Column(db.String(64))
category = db.Column(db.SmallInteger) # 0: 总数统计, 1: 字段值统计, 2: 关系统计
enabled = db.Column(db.Boolean, default=False)
order = db.Column(db.Integer, default=0)
type_id = db.Column(db.Integer, db.ForeignKey('c_ci_types.id'))
attr_id = db.Column(db.Integer, db.ForeignKey('c_attributes.id'))
level = db.Column(db.Integer)
options = db.Column(db.JSON)
class SystemConfig(Model):
__tablename__ = "c_sc"
name = db.Column(db.String(64), index=True)
option = db.Column(db.JSON)
# auto discovery
class AutoDiscoveryRule(Model):
__tablename__ = "c_ad_rules"
name = db.Column(db.String(32))
type = db.Column(db.Enum(*AutoDiscoveryType.all()), index=True)
is_inner = db.Column(db.Boolean, default=False, index=True)
owner = db.Column(db.Integer, index=True)
option = db.Column(db.JSON) # layout
attributes = db.Column(db.JSON)
is_plugin = db.Column(db.Boolean, default=False)
plugin_script = db.Column(db.Text)
unique_key = db.Column(db.String(64))
class AutoDiscoveryCIType(Model):
__tablename__ = "c_ad_ci_types"
type_id = db.Column(db.Integer, db.ForeignKey('c_ci_types.id'))
adr_id = db.Column(db.Integer, db.ForeignKey('c_ad_rules.id'))
attributes = db.Column(db.JSON) # {ad_key: cmdb_key}
relation = db.Column(db.JSON) # [{ad_key: {type_id: x, attr_id: x}}]
auto_accept = db.Column(db.Boolean, default=False)
agent_id = db.Column(db.String(8), index=True)
query_expr = db.Column(db.Text)
interval = db.Column(db.Integer) # seconds
cron = db.Column(db.String(128))
extra_option = db.Column(db.JSON)
uid = db.Column(db.Integer, index=True)
class AutoDiscoveryCI(Model):
__tablename__ = "c_ad_ci"
type_id = db.Column(db.Integer, db.ForeignKey('c_ci_types.id'))
adt_id = db.Column(db.Integer, db.ForeignKey('c_ad_ci_types.id'))
unique_value = db.Column(db.String(128), index=True)
instance = db.Column(db.JSON)
ci_id = db.Column(db.Integer, index=True)
is_accept = db.Column(db.Boolean, default=False)
accept_by = db.Column(db.String(64), index=True)
accept_time = db.Column(db.DateTime)
class CIFilterPerms(Model):
__tablename__ = "c_ci_filter_perms"
name = db.Column(db.String(64), index=True)
type_id = db.Column(db.Integer, db.ForeignKey('c_ci_types.id'))
ci_filter = db.Column(db.Text)
attr_filter = db.Column(db.Text)
rid = db.Column(db.Integer, index=True)
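A read-only query sketch against the models above (assumes an application context; the type name "server" is illustrative, and get_by is the helper from api.lib.database used the same way by the task modules later in this diff):

server_type = CIType.get_by(name="server", first=True, to_dict=False)
if server_type:
    cis = CI.get_by(type_id=server_type.id, to_dict=False)
    children = CIRelation.get_by(first_ci_id=cis[0].id, to_dict=False) if cis else []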

New file (98 lines)
# -*- coding:utf-8 -*-
from api.extensions import db
from api.lib.database import Model, TimestampMixin, SoftDeleteMixin, CRUDMixin
class ModelWithoutPK(db.Model, TimestampMixin, SoftDeleteMixin, CRUDMixin):
__table_args__ = {"extend_existing": True}
__abstract__ = True
class Department(ModelWithoutPK):
__tablename__ = 'common_department'
department_id = db.Column(db.Integer, primary_key=True, autoincrement=True)
department_name = db.Column(db.VARCHAR(255), default='')
department_director_id = db.Column(
db.Integer, default=0)
department_parent_id = db.Column(db.Integer, default=1)
sort_value = db.Column(db.Integer, default=0)
acl_rid = db.Column(db.Integer, default=0)
class Employee(ModelWithoutPK):
__tablename__ = 'common_employee'
employee_id = db.Column(db.Integer, primary_key=True, autoincrement=True)
email = db.Column(db.VARCHAR(255), default='')
username = db.Column(db.VARCHAR(255), default='')
nickname = db.Column(db.VARCHAR(255), default='')
sex = db.Column(db.VARCHAR(64), default='')
position_name = db.Column(db.VARCHAR(255), default='')
mobile = db.Column(db.VARCHAR(255), default='')
avatar = db.Column(db.VARCHAR(255), default='')
direct_supervisor_id = db.Column(db.Integer, default=0)
department_id = db.Column(db.Integer,
db.ForeignKey('common_department.department_id')
)
acl_uid = db.Column(db.Integer, default=0)
acl_rid = db.Column(db.Integer, default=0)
acl_virtual_rid = db.Column(db.Integer, default=0)
last_login = db.Column(db.TIMESTAMP, nullable=True)
block = db.Column(db.Integer, default=0)
notice_info = db.Column(db.JSON, default={})
_department = db.relationship(
'Department', backref='common_employee.department_id',
lazy='joined'
)
class EmployeeInfo(Model):
__tablename__ = 'common_employee_info'
info = db.Column(db.JSON, default={})
employee_id = db.Column(db.Integer, db.ForeignKey(
'common_employee.employee_id'))
employee = db.relationship(
'Employee', backref='common_employee.employee_id', lazy='joined')
class CompanyInfo(Model):
__tablename__ = "common_company_info_json"
info = db.Column(db.JSON)
class InternalMessage(Model):
__tablename__ = "common_internal_message"
title = db.Column(db.VARCHAR(255), nullable=True)
content = db.Column(db.TEXT, nullable=True)
path = db.Column(db.VARCHAR(255), nullable=True)
is_read = db.Column(db.Boolean, default=False)
app_name = db.Column(db.VARCHAR(128), nullable=False)
category = db.Column(db.VARCHAR(128), nullable=False)
message_data = db.Column(db.JSON, nullable=True)
employee_id = db.Column(db.Integer, db.ForeignKey('common_employee.employee_id'), comment='ID')
class CommonData(Model):
__tablename__ = 'common_data'
data_type = db.Column(db.VARCHAR(255), default='')
data = db.Column(db.JSON)
class NoticeConfig(Model):
__tablename__ = "common_notice_config"
platform = db.Column(db.VARCHAR(255), nullable=False)
info = db.Column(db.JSON)

cmdb-api/api/resource.py (new file, 50 lines)
# -*- coding:utf-8 -*-
import os
import sys
from inspect import getmembers
from inspect import isclass
import six
from flask import jsonify
from flask import send_file
from flask_restful import Resource
from api.lib.perm.auth import auth_required
class APIView(Resource):
method_decorators = [auth_required]
def __init__(self):
super(APIView, self).__init__()
@staticmethod
def jsonify(*args, **kwargs):
return jsonify(*args, **kwargs)
@staticmethod
def send_file(*args, **kwargs):
return send_file(*args, **kwargs)
API_PACKAGE = os.path.abspath(os.path.dirname(__file__))
def register_resources(resource_path, rest_api):
for root, _, files in os.walk(os.path.join(resource_path)):
for filename in files:
if not filename.startswith("_") and filename.endswith("py"):
if root not in sys.path:
sys.path.insert(1, root)
view = __import__(os.path.splitext(filename)[0])
resource_list = [o[0] for o in getmembers(view) if isclass(o[1]) and issubclass(o[1], Resource)]
resource_list = [i for i in resource_list if i != "APIView"]
for resource_cls_name in resource_list:
resource_cls = getattr(view, resource_cls_name)
if not hasattr(resource_cls, "url_prefix"):
resource_cls.url_prefix = ("",)
if isinstance(resource_cls.url_prefix, six.string_types):
resource_cls.url_prefix = (resource_cls.url_prefix,)
rest_api.add_resource(resource_cls, *resource_cls.url_prefix)
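A sketch of the convention register_resources relies on: each resource module defines APIView subclasses carrying a url_prefix, which are collected and registered automatically (the class, route and directory below are illustrative):

class HealthView(APIView):
    url_prefix = ("/health",)

    def get(self):
        return self.jsonify(status="ok")

# typically wired from the app factory, roughly:
#   register_resources(os.path.join(API_PACKAGE, "views"), rest_api)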

New file (1 line)
# -*- coding:utf-8 -*-

cmdb-api/api/tasks/acl.py (new file, 203 lines)
# -*- coding:utf-8 -*-
import json
import re
from celery_once import QueueOnce
from flask import current_app
from werkzeug.exceptions import BadRequest
from werkzeug.exceptions import NotFound
from api.extensions import celery
from api.extensions import db
from api.lib.perm.acl.audit import AuditCRUD
from api.lib.perm.acl.audit import AuditOperateSource
from api.lib.perm.acl.audit import AuditOperateType
from api.lib.perm.acl.cache import AppCache
from api.lib.perm.acl.cache import RoleCache
from api.lib.perm.acl.cache import RoleRelationCache
from api.lib.perm.acl.cache import UserCache
from api.lib.perm.acl.const import ACL_QUEUE
from api.lib.perm.acl.record import OperateRecordCRUD
from api.models.acl import Resource
from api.models.acl import Role
from api.models.acl import Trigger
@celery.task(base=QueueOnce,
name="acl.role_rebuild",
queue=ACL_QUEUE,
once={"graceful": True, "unlock_before_run": True})
def role_rebuild(rids, app_id):
rids = rids if isinstance(rids, list) else [rids]
for rid in rids:
RoleRelationCache.rebuild(rid, app_id)
current_app.logger.info("Role {0} App {1} rebuild..........".format(rids, app_id))
@celery.task(name="acl.update_resource_to_build_role", queue=ACL_QUEUE)
def update_resource_to_build_role(resource_id, app_id, group_id=None):
rids = [i.id for i in Role.get_by(__func_isnot__key_uid=None, fl='id', to_dict=False)]
rids += [i.id for i in Role.get_by(app_id=app_id, fl='id', to_dict=False)]
rids += [i.id for i in Role.get_by(__func_is___key_uid=None, __func_is___key_app_id=None, fl='id', to_dict=False)]
current_app.logger.info(rids)
for rid in rids:
if resource_id and resource_id in RoleRelationCache.get_resources(rid, app_id).get('id2perms', {}):
RoleRelationCache.rebuild2(rid, app_id)
if group_id and group_id in RoleRelationCache.get_resources(rid, app_id).get('group2perms', {}):
RoleRelationCache.rebuild2(rid, app_id)
@celery.task(name="acl.apply_trigger", queue=ACL_QUEUE)
def apply_trigger(_id, resource_id=None, operator_uid=None):
db.session.remove()
from api.lib.perm.acl.permission import PermissionCRUD
trigger = Trigger.get_by_id(_id)
if trigger is None:
return
uid = json.loads(trigger.uid or '[]')
if resource_id is None:
wildcard = (trigger.wildcard or '')
if wildcard and uid:
query = Resource.get_by(__func_in___key_uid=uid,
app_id=trigger.app_id,
resource_type_id=trigger.resource_type_id,
fl=['id', 'app_id'],
only_query=True)
try:
re.compile(wildcard)
resources = query.filter(Resource.name.op('regexp')(wildcard)).all()
except:
resources = query.filter(Resource.name.ilike(wildcard.replace('*', '%'))).all()
elif wildcard:
query = Resource.get_by(app_id=trigger.app_id,
resource_type_id=trigger.resource_type_id,
only_query=True)
try:
re.compile(wildcard)
resources = query.filter(Resource.name.op('regexp')(wildcard)).all()
except:
resources = query.filter(Resource.name.ilike(wildcard.replace('*', '%'))).all()
elif uid:
resources = Resource.get_by(__func_in___key_uid=uid,
app_id=trigger.app_id,
resource_type_id=trigger.resource_type_id,
to_dict=False)
else:
resources = []
else:
resources = [Resource.get_by_id(resource_id)]
perms = json.loads(trigger.permissions)
roles = json.loads(trigger.roles)
for resource in resources:
for rid in roles:
try:
PermissionCRUD.grant(rid, perms, resource.id, rebuild=False, source=AuditOperateSource.trigger)
except (NotFound, BadRequest):
pass
AuditCRUD.add_trigger_log(trigger.app_id, trigger.id, AuditOperateType.trigger_apply, {}, trigger.to_dict(),
{'uid': uid,
'resource_ids': [r.id for r in resources],
'perms': perms,
'rids': roles},
uid=operator_uid, source=AuditOperateSource.trigger)
if resources:
role_rebuild(roles, resources[0].app_id)
@celery.task(name="acl.cancel_trigger", queue=ACL_QUEUE)
def cancel_trigger(_id, resource_id=None, operator_uid=None):
db.session.remove()
from api.lib.perm.acl.permission import PermissionCRUD
trigger = Trigger.get_by_id(_id)
if trigger is None:
return
uid = json.loads(trigger.uid or '[]')
if resource_id is None:
wildcard = (trigger.wildcard or '')
if wildcard and uid:
query = Resource.get_by(__func_in___key_uid=uid,
app_id=trigger.app_id,
resource_type_id=trigger.resource_type_id,
fl=['id', 'app_id'],
only_query=True)
try:
re.compile(wildcard)
resources = query.filter(Resource.name.op('regexp')(wildcard)).all()
except:
resources = query.filter(Resource.name.ilike(wildcard.replace('*', '%'))).all()
elif wildcard:
query = Resource.get_by(app_id=trigger.app_id,
resource_type_id=trigger.resource_type_id,
only_query=True)
try:
re.compile(wildcard)
resources = query.filter(Resource.name.op('regexp')(wildcard)).all()
except:
resources = query.filter(Resource.name.ilike(wildcard.replace('*', '%'))).all()
elif uid:
resources = Resource.get_by(__func_in___key_uid=uid,
app_id=trigger.app_id,
resource_type_id=trigger.resource_type_id,
to_dict=False)
else:
resources = []
else:
resources = [Resource.get_by_id(resource_id)]
perms = json.loads(trigger.permissions)
roles = json.loads(trigger.roles)
for resource in resources:
if not resource:
continue
for rid in roles:
try:
PermissionCRUD.revoke(rid, perms, resource.id, rebuild=False, source=AuditOperateSource.trigger)
except (NotFound, BadRequest):
pass
AuditCRUD.add_trigger_log(trigger.app_id, trigger.id, AuditOperateType.trigger_cancel, {}, trigger.to_dict(),
{'uid': uid,
'resource_ids': [r.id for r in resources if r],
'perms': perms,
'rids': roles},
uid=operator_uid, source=AuditOperateSource.trigger)
if resources:
role_rebuild(roles, resources[0].app_id)
@celery.task(name="acl.op_record", queue=ACL_QUEUE)
def op_record(app, rolename, operate_type, obj):
if isinstance(app, int):
app = AppCache.get(app)
app = app and app.name
if isinstance(rolename, int):
u = UserCache.get(rolename)
if u:
rolename = u.username
if not u:
r = RoleCache.get(rolename)
if r:
rolename = r.name
OperateRecordCRUD.add(app, rolename, operate_type, obj)
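Dispatch sketch for the tasks above (the role id and app id are illustrative; op_record is queued the same way from api/models/acl.py earlier in this diff):

# from api.lib.perm.acl.const import ACL_QUEUE, OperateType
role_rebuild.apply_async(args=([123], 1), queue=ACL_QUEUE)
op_record.apply_async(args=(None, "admin", OperateType.LOGIN, ["ACL"]), queue=ACL_QUEUE)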

cmdb-api/api/tasks/cmdb.py (new file, 203 lines)
# -*- coding:utf-8 -*-
import json
import time
from flask import current_app
from flask_login import login_user
import api.lib.cmdb.ci
from api.extensions import celery
from api.extensions import db
from api.extensions import es
from api.extensions import rd
from api.lib.cmdb.cache import CITypeAttributesCache
from api.lib.cmdb.const import CMDB_QUEUE
from api.lib.cmdb.const import REDIS_PREFIX_CI
from api.lib.cmdb.const import REDIS_PREFIX_CI_RELATION
from api.lib.perm.acl.cache import UserCache
from api.lib.utils import Lock
from api.lib.utils import handle_arg_list
from api.models.cmdb import CI
from api.models.cmdb import CIRelation
from api.models.cmdb import CITypeAttribute
@celery.task(name="cmdb.ci_cache", queue=CMDB_QUEUE)
def ci_cache(ci_id, operate_type, record_id):
from api.lib.cmdb.ci import CITriggerManager
time.sleep(0.01)
db.session.remove()
m = api.lib.cmdb.ci.CIManager()
ci_dict = m.get_ci_by_id_from_db(ci_id, need_children=False, use_master=False)
if current_app.config.get("USE_ES"):
es.create_or_update(ci_id, ci_dict)
else:
rd.create_or_update({ci_id: json.dumps(ci_dict)}, REDIS_PREFIX_CI)
current_app.logger.info("{0} flush..........".format(ci_id))
if operate_type:
current_app.test_request_context().push()
login_user(UserCache.get('worker'))
CITriggerManager.fire(operate_type, ci_dict, record_id)
@celery.task(name="cmdb.batch_ci_cache", queue=CMDB_QUEUE)
def batch_ci_cache(ci_ids, ): # only for attribute change index
time.sleep(1)
db.session.remove()
for ci_id in ci_ids:
m = api.lib.cmdb.ci.CIManager()
ci_dict = m.get_ci_by_id_from_db(ci_id, need_children=False, use_master=False)
if current_app.config.get("USE_ES"):
es.create_or_update(ci_id, ci_dict)
else:
rd.create_or_update({ci_id: json.dumps(ci_dict)}, REDIS_PREFIX_CI)
current_app.logger.info("{0} flush..........".format(ci_id))
@celery.task(name="cmdb.ci_delete", queue=CMDB_QUEUE)
def ci_delete(ci_id):
current_app.logger.info(ci_id)
if current_app.config.get("USE_ES"):
es.delete(ci_id)
else:
rd.delete(ci_id, REDIS_PREFIX_CI)
current_app.logger.info("{0} delete..........".format(ci_id))
@celery.task(name="cmdb.ci_delete_trigger", queue=CMDB_QUEUE)
def ci_delete_trigger(trigger, operate_type, ci_dict):
current_app.logger.info('delete ci {} trigger'.format(ci_dict['_id']))
from api.lib.cmdb.ci import CITriggerManager
current_app.test_request_context().push()
login_user(UserCache.get('worker'))
CITriggerManager.fire_by_trigger(trigger, operate_type, ci_dict)
@celery.task(name="cmdb.ci_relation_cache", queue=CMDB_QUEUE)
def ci_relation_cache(parent_id, child_id):
db.session.remove()
with Lock("CIRelation_{}".format(parent_id)):
children = rd.get([parent_id], REDIS_PREFIX_CI_RELATION)[0]
children = json.loads(children) if children is not None else {}
cr = CIRelation.get_by(first_ci_id=parent_id, second_ci_id=child_id, first=True, to_dict=False)
if str(child_id) not in children:
children[str(child_id)] = cr.second_ci.type_id
rd.create_or_update({parent_id: json.dumps(children)}, REDIS_PREFIX_CI_RELATION)
current_app.logger.info("ADD ci relation cache: {0} -> {1}".format(parent_id, child_id))
@celery.task(name="cmdb.ci_relation_add", queue=CMDB_QUEUE)
def ci_relation_add(parent_dict, child_id, uid):
"""
:param parent_dict: key is '$parent_model.attr_name'
:param child_id:
:param uid:
:return:
"""
from api.lib.cmdb.ci import CIRelationManager
from api.lib.cmdb.ci_type import CITypeAttributeManager
from api.lib.cmdb.search import SearchError
from api.lib.cmdb.search.ci import search
current_app.test_request_context().push()
login_user(UserCache.get(uid))
db.session.remove()
for parent in parent_dict:
parent_ci_type_name, _attr_name = parent.strip()[1:].split('.', 1)
attr_name = CITypeAttributeManager.get_attr_name(parent_ci_type_name, _attr_name)
if attr_name is None:
current_app.logger.warning("attr name {} does not exist".format(_attr_name))
continue
parent_dict[parent] = handle_arg_list(parent_dict[parent])
for v in parent_dict[parent]:
query = "_type:{},{}:{}".format(parent_ci_type_name, attr_name, v)
s = search(query)
try:
response, _, _, _, _, _ = s.search()
except SearchError as e:
current_app.logger.error('ci relation add failed: {}'.format(e))
continue
for ci in response:
try:
CIRelationManager.add(ci['_id'], child_id)
ci_relation_cache(ci['_id'], child_id)
except Exception as e:
current_app.logger.warning(e)
finally:
db.session.remove()
@celery.task(name="cmdb.ci_relation_delete", queue=CMDB_QUEUE)
def ci_relation_delete(parent_id, child_id):
with Lock("CIRelation_{}".format(parent_id)):
children = rd.get([parent_id], REDIS_PREFIX_CI_RELATION)[0]
children = json.loads(children) if children is not None else {}
if str(child_id) in children:
children.pop(str(child_id))
rd.create_or_update({parent_id: json.dumps(children)}, REDIS_PREFIX_CI_RELATION)
current_app.logger.info("DELETE ci relation cache: {0} -> {1}".format(parent_id, child_id))
@celery.task(name="cmdb.ci_type_attribute_order_rebuild", queue=CMDB_QUEUE)
def ci_type_attribute_order_rebuild(type_id, uid):
current_app.logger.info('rebuild attribute order')
db.session.remove()
from api.lib.cmdb.ci_type import CITypeAttributeGroupManager
attrs = CITypeAttributesCache.get(type_id)
id2attr = {attr.attr_id: attr for attr in attrs}
current_app.test_request_context().push()
login_user(UserCache.get(uid))
res = CITypeAttributeGroupManager.get_by_type_id(type_id, True)
order = 0
for group in res:
for _attr in group.get('attributes'):
_attr_obj = id2attr.get(_attr['id'])
if _attr_obj and _attr_obj.order != order:
    _attr_obj.update(order=order)
order += 1
@celery.task(name="cmdb.calc_computed_attribute", queue=CMDB_QUEUE)
def calc_computed_attribute(attr_id, uid):
from api.lib.cmdb.ci import CIManager
db.session.remove()
current_app.test_request_context().push()
login_user(UserCache.get(uid))
cim = CIManager()
for i in CITypeAttribute.get_by(attr_id=attr_id, to_dict=False):
cis = CI.get_by(type_id=i.type_id, to_dict=False)
for ci in cis:
cim.update(ci.id, {})
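Dispatch sketch (the CI ids are illustrative; passing None as operate_type refreshes the cache without firing triggers, as ci_cache above shows):

ci_cache.apply_async(args=(1001, None, None), queue=CMDB_QUEUE)
ci_relation_cache.apply_async(args=(1001, 2002), queue=CMDB_QUEUE)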

New file (77 lines)
# -*- coding:utf-8 -*-
import requests
from flask import current_app
from api.extensions import celery
from api.extensions import db
from api.lib.common_setting.acl import ACLManager
from api.lib.common_setting.const import COMMON_SETTING_QUEUE
from api.lib.common_setting.resp_format import ErrFormat
from api.models.common_setting import Department
@celery.task(name="common_setting.edit_employee_department_in_acl", queue=COMMON_SETTING_QUEUE)
def edit_employee_department_in_acl(e_list, new_d_id, op_uid):
"""
:param e_list: [{"e_acl_rid": 11, "department_id": 22}, ...]
:param new_d_id
:param op_uid
"""
db.session.remove()
result = []
new_department = Department.get_by(
first=True, department_id=new_d_id, to_dict=False)
if not new_department:
result.append(ErrFormat.new_department_is_none)
return result
acl = ACLManager('acl', str(op_uid))
role_map = {role['name']: role['id'] for role in acl.get_all_roles()}
new_d_rid_in_acl = role_map.get(new_department.department_name, 0)
if new_d_rid_in_acl == 0:
return
if new_d_rid_in_acl != new_department.acl_rid:
new_department.update(
acl_rid=new_d_rid_in_acl
)
new_department_acl_rid = new_department.acl_rid if new_d_rid_in_acl == new_department.acl_rid else new_d_rid_in_acl
for employee in e_list:
old_department = Department.get_by(
first=True, department_id=employee.get('department_id'), to_dict=False)
if not old_department:
continue
employee_acl_rid = employee.get('e_acl_rid')
if employee_acl_rid == 0:
result.append(ErrFormat.employee_acl_rid_is_zero)
continue
old_d_rid_in_acl = role_map.get(old_department.department_name, 0)
if old_d_rid_in_acl == 0:
return
if old_d_rid_in_acl != old_department.acl_rid:
old_department.update(
acl_rid=old_d_rid_in_acl
)
d_acl_rid = old_department.acl_rid if old_d_rid_in_acl == old_department.acl_rid else old_d_rid_in_acl
payload = {
'app_id': 'acl',
'parent_id': d_acl_rid,
}
try:
acl.remove_user_from_role(employee_acl_rid, payload)
except Exception as e:
result.append(ErrFormat.acl_remove_user_from_role_failed.format(str(e)))
payload = {
'app_id': 'acl',
'child_ids': [employee_acl_rid],
}
try:
acl.add_user_to_role(new_department_acl_rid, payload)
except Exception as e:
result.append(ErrFormat.acl_add_user_to_role_failed.format(str(e)))
return result
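Dispatch sketch (all ids are illustrative; each e_list entry carries the employee's ACL rid and current department id, matching the keys read above):

edit_employee_department_in_acl.apply_async(
    args=([{"e_acl_rid": 11, "department_id": 22}], 33, 1),
    queue=COMMON_SETTING_QUEUE,
)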

New file (2 lines)
# -*- coding:utf-8 -*-

Some files were not shown because too many files have changed in this diff.