Compare commits

...

63 commits

Author SHA1 Message Date
8e6dd476bf enhancement(src/risotto/image.py): better messages
close #11
2023-08-01 14:58:49 +02:00
55b0140edd better tls zone calculation 2023-07-31 19:11:01 +02:00
2edf4d2e86 better tls zone calculation 2023-07-31 18:50:05 +02:00
1ce00d81cb better tls zone calculation 2023-07-31 18:45:36 +02:00
6d8219b22c fix(src/risotto/rougail/annotator.py): do not raise if suppliers not exists 2023-07-31 17:05:55 +02:00
96132ae60a enhancement(ansible/host.yml) remove packages 2023-07-31 15:27:56 +02:00
db86a3d02c enhancement(src/risotto/machine.py) include functions directly in the cache file and add debuging lines 2023-07-28 08:47:26 +02:00
daf4833691 bug(ansible/sbin/update_images) better problem detection 2023-07-28 08:44:54 +02:00
cebb8f970b bug(ansible/sbin/make_changelog) do not failed is a package is view as already set
close #9
2023-07-28 08:43:29 +02:00
d0aaf99890 enhancement(ansible/inventory.py) add --quit argument
close #8
2023-07-28 08:40:49 +02:00
833f6e8d4d bug(ansible/host.yml) install Vector before trying to starts service
close #7
2023-07-28 08:40:02 +02:00
75bb6b0765 enhancement(src/risotto/rougail/annotator.py) refactor
close #6
2023-07-28 08:39:36 +02:00
54ff8f23ed better update_images script 2023-06-22 16:22:54 +02:00
63931b8880 debug 2023-06-22 16:21:16 +02:00
186c886e84 tiramisu 3 to 4 2023-06-22 16:19:44 +02:00
5e735cf453 tls_server is a special machine 2023-06-22 16:16:32 +02:00
7ebcff86cb do not create srv/journald directory with ansible 2023-06-22 16:16:01 +02:00
0705409393 set machines_changed to machines.xml 2023-06-22 16:15:29 +02:00
84315c7bae configure timezone/hostname in host 2023-06-22 16:12:09 +02:00
f0d3ca8f43 install vector in host 2023-06-22 16:11:49 +02:00
88fc83014b diagnose: better debug 2023-06-22 15:52:45 +02:00
120af2ff01 not stop/start machine during backup 2023-06-22 15:51:16 +02:00
44eb0031de templating is now done by ansible 2023-06-22 15:49:09 +02:00
bfa697f457 add filter from dataset 2023-06-22 15:45:55 +02:00
0b0503e109 support raise in template 2023-06-22 15:45:05 +02:00
7f745b3609 tls_machine is not mandatory 2023-06-22 15:37:29 +02:00
4bacf38188 module_name is in general family 2023-06-22 15:34:04 +02:00
79a549cd4c add normalize_family filter 2023-06-22 15:15:09 +02:00
7b74984fb7 auto calculate zones 2023-03-03 14:06:30 +01:00
b1098a966b only restart official service 2023-03-03 08:34:55 +01:00
5d969ada38 remove modules in infrastructure 2023-03-02 21:58:24 +01:00
bb35d6cf3e modif 2023-02-27 14:03:56 +01:00
afd28627f3 simplify ansible role 2023-01-23 20:23:32 +01:00
124c6b56d2 add risotto_auto_doc script 2022-12-25 17:21:03 +01:00
23bacdb9c6 add official dataset ref 2022-12-25 17:18:18 +01:00
11029ea231 better error informations 2022-12-25 17:17:59 +01:00
212699f571 reorganise 2022-12-21 16:35:58 +01:00
6e38f1c4d1 add backup images 2022-12-21 16:28:09 +01:00
f79e486371 remove old scripts 2022-12-21 16:27:33 +01:00
5bec5dffed documentation 2022-12-21 16:14:27 +01:00
e1a447da7d add logo and schema 2022-11-12 11:44:23 +01:00
Emmanuel Garette
14a2cc65f9 better config change detection 2022-10-17 18:52:42 +02:00
Emmanuel Garette
8895c3ee9e machinectl: add enabled 2022-10-17 18:51:54 +02:00
Emmanuel Garette
de48994d76 ansible: can delete old image before rebuild 2022-10-17 18:51:04 +02:00
Emmanuel Garette
34d277d80f src/risotto/image.py: display depends if failed 2022-10-17 18:49:34 +02:00
Emmanuel Garette
74878cae0f split update_images and diagnose 2022-10-17 18:48:32 +02:00
Emmanuel Garette
b9be6491cc add diagnose command 2022-10-17 18:44:00 +02:00
Emmanuel Garette
30a605a81c convert to ansible 2022-10-01 22:33:11 +02:00
Emmanuel Garette
e3bca44f3a only one config 2022-08-21 19:03:38 +02:00
Emmanuel Garette
231125be0c remove application service version 2022-07-01 22:13:16 +02:00
Emmanuel Garette
6b65f80919 copy original template 2022-07-01 18:57:18 +02:00
Emmanuel Garette
cb1ab19099 better hide_secret support 2022-06-26 19:34:26 +02:00
Emmanuel Garette
a400e81fbe add 'provider' support in applicationservice.yml 2022-06-25 08:11:05 +02:00
Emmanuel Garette
382a5322a6 split code 2022-06-24 19:02:45 +02:00
Emmanuel Garette
23adccacb3 get_ip_domain 2022-05-23 08:48:36 +02:00
Emmanuel Garette
76a99dc532 set_linked_multi_variables 2022-04-28 21:46:26 +02:00
Emmanuel Garette
127dba8c52 add set_linked_multi_variables function 2022-03-20 21:17:24 +01:00
c8430f440e Merge pull request 'issue/certif_add_chain' (#4) from gnunux/risotto:issue/certif_add_chain into main
Reviewed-on: https://cloud.silique.fr/gitea/risotto/risotto/pulls/4
2022-03-15 11:57:00 +00:00
Emmanuel Garette
02e96812ae add chain in certificate 2022-03-15 12:56:25 +01:00
9d12af51c0 Merge pull request 'main' (#1) from main into issue/certif_add_chain
Reviewed-on: https://cloud.silique.fr/gitea/gnunux/risotto/pulls/1
2022-03-15 09:56:32 +00:00
c37b096838 Merge pull request 'Add some dependencies and split set-up steps.' (#3) from bbohard/risotto:issues/1_README.md into main
Reviewed-on: https://cloud.silique.fr/gitea/risotto/risotto/pulls/3
2022-03-15 08:22:18 +00:00
b2fcdac493 Merge pull request 'Rename script used to build configuration.' (#2) from bbohard/risotto:issues/2_init_script_name into main
Reviewed-on: https://cloud.silique.fr/gitea/risotto/risotto/pulls/2
2022-03-15 08:21:27 +00:00
Benjamin Bohard
7c26744cbd Rename script used to build configuration.
Ref #2
2022-03-12 11:10:19 +01:00
78 changed files with 9330 additions and 1035 deletions

@@ -1,9 +1,11 @@
![Logo Risotto](logo.png "logo risotto")
# Risotto
## Install dependencies
- python3-dkimpy
- python3-cheetay
- python3-cheetah
- ldns-utils
## Installation
@@ -15,6 +17,10 @@ Clone projects:
- https://cloud.silique.fr/gitea/risotto/rougail
- https://cloud.silique.fr/gitea/risotto/risotto
## Documentation
[Documentation](doc/README.md)
## Set up
Set up Risotto:
@@ -29,7 +35,7 @@ In risotto.conf change the dataset directory.
Set up infrastructure:
```bash
cp server.json.example server.json
cp server.yml.example server.yml
```
Modify infrastructure description as required.
@@ -43,37 +49,5 @@ Generate the configuration:
Send configuration to remote server:
```bash
HOST=cloud.silique.fr
rm -f installations.tar
tar -cf installations.tar installations
scp installations.tar root@$HOST:
```
## Deploy
In host:
```bash
cd
rm -rf installations
tar xf installations.tar
cd installations
```
Set up host:
```bash
./install_host cloud.silique.fr
```
Build container image:
```bash
./install_images cloud.silique.fr
```
Set up the containers and start them up:
```bash
./install_machines cloud.silique.fr
ansible-playbook -i ansible/inventory.py ansible/playbook.yml
```
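A sketch, not part of this diff: the deployment variables that `ansible/inventory.py` injects with default values further down in this compare view (`configure_host`, `only_machine`, `copy_templates`, `copy_tests`) can presumably be overridden with standard Ansible extra-vars; passing them as JSON keeps their boolean/null types.
```bash
# Hedged example: variable names come from get_vars() in ansible/inventory.py;
# -e extra-vars take precedence over the inventory-provided defaults.
ansible-playbook -i ansible/inventory.py ansible/playbook.yml \
    -e '{"configure_host": true, "copy_tests": false, "copy_templates": false}'
```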

ansible/__init__.py Normal file (0 lines)

@@ -0,0 +1,44 @@
#!/usr/bin/python3
from os import listdir, makedirs
from os.path import isfile, isdir, join, dirname
from shutil import copy2, copytree, rmtree
from ansible.plugins.action import ActionBase
from risotto.utils import RISOTTO_CONFIG
class ActionModule(ActionBase):
def run(self, tmp=None, task_vars=None):
super(ActionModule, self).run(tmp, task_vars)
module_args = self._task.args.copy()
modules = module_args['modules']
copy_tests = module_args.get('copy_tests', False)
dataset_directories = RISOTTO_CONFIG['directories']['datasets']
install_dir = join('/tmp/risotto/images')
if isdir(install_dir):
rmtree(install_dir)
if copy_tests:
install_tests_dir = join('/tmp/risotto/tests')
if isdir(install_tests_dir):
rmtree(install_tests_dir)
for module_name, depends in modules.items():
for dataset_directory in dataset_directories:
for depend in depends:
if copy_tests:
tests_dir = join(dataset_directory, depend, 'tests')
if isdir(tests_dir):
for filename in listdir(tests_dir):
src_file = join(tests_dir, filename)
dst_file = join(install_tests_dir, module_name, filename)
copy(src_file, dst_file)
# manual = join(dataset_directory, depend, 'manual', 'image')
# if not isdir(manual):
# continue
# for filename in listdir(manual):
# src_file = join(manual, filename)
# dst_file = join(install_dir, module_name, filename)
# copy(src_file, dst_file)
return dict(ansible_facts=dict({}))

@@ -0,0 +1,14 @@
#!/usr/bin/python3
from ansible.plugins.action import ActionBase
class ActionModule(ActionBase):
def run(self, tmp=None, task_vars=None):
super(ActionModule, self).run(tmp, task_vars)
module_args = self._task.args.copy()
module_return = self._execute_module(module_name='machinectl',
module_args=module_args,
task_vars=task_vars, tmp=tmp)
if module_return.get('failed'):
return module_return
return {'ansible_facts': {}, 'changed': module_return['changed']}

@@ -0,0 +1,280 @@
#!/usr/bin/python3
from os import readlink, walk, chdir, getcwd, makedirs
from os.path import join, islink, isdir
from typing import Dict, Any
from shutil import rmtree, copy2
import tarfile
from ansible.module_utils._text import to_text
from ansible import constants
from rougail.template import base
from rougail.error import TemplateError
from risotto.machine import build_files, INSTALL_DIR, INSTALL_CONFIG_DIR, INSTALL_TMPL_DIR, INSTALL_IMAGES_DIR, INSTALL_TESTS_DIR
from risotto.utils import custom_filters
try:
from ansible.plugins.action import ActionBase
from ansible.module_utils.basic import AnsibleModule
class FakeModule(AnsibleModule):
def __init__(self):
pass
from ansible.plugins.action.template import ActionModule as TmplActionModule
except:
class ActionBase():
def __init__(self, *args, **kwargs):
raise Exception('works only with ansible')
ARCHIVES_DIR = '/tmp/new_configurations'
def is_diff(server_name,
remote_directories,
certificates,
):
ret = {}
module = FakeModule()
current_path = getcwd()
root = join(INSTALL_DIR, INSTALL_CONFIG_DIR, server_name)
chdir(root)
search_paths = [join(directory[2:], f) for directory, subdirectories, files in walk('.') for f in files]
chdir(current_path)
for path in search_paths:
if path not in remote_directories:
return True
full_path = join(root, path)
if not islink(full_path):
if remote_directories[path] != module.digest_from_file(full_path, 'sha256'):
return True
elif remote_directories[path] != readlink(full_path):
return True
remote_directories.pop(path)
if remote_directories:
for certificate in certificates:
for typ in ['name', 'private', 'authority']:
if not typ in certificate:
continue
name = certificate[typ][1:]
if name in remote_directories:
remote_directories.pop(name)
if remote_directories:
return True
return False
class ActionModule(ActionBase):
def run(self, tmp=None, task_vars=None):
super(ActionModule, self).run(tmp, task_vars)
module_args = self._task.args.copy()
hostname = module_args.pop('hostname')
only_machine = module_args.pop('only_machine')
configure_host = module_args.pop('configure_host')
copy_tests = module_args.pop('copy_tests')
# define ansible engine
base.ENGINES['ansible'] = Tmpl(task_vars,
self._task,
self._connection,
self._play_context,
self._loader,
self._templar,
self._shared_loader_obj,
)
if 'copy_templates' in module_args:
copy_templates = module_args.pop('copy_templates')
else:
copy_templates = False
directories, certificates = build_files(hostname,
only_machine,
False,
copy_tests,
)
module_args['directories'] = list(directories.values())
module_args['directories'].append('/var/lib/risotto/images_files')
remote = self._execute_module(module_name='compare',
module_args=module_args,
task_vars=task_vars,
)
if remote.get('failed'):
if 'module_stdout' in remote:
msg = remote['module_stdout']
else:
msg = remote['msg']
raise Exception(f'error in remote action: {msg}')
if copy_templates:
build_files(hostname,
only_machine,
True,
copy_tests,
)
machines_changed = []
for machine, directory in directories.items():
if directory not in remote['directories']:
machines_changed.append(machine)
continue
if is_diff(machine,
remote['directories'][directory],
certificates['certificates'].get(machine, []),
):
machines_changed.append(machine)
current_path = getcwd()
if isdir(ARCHIVES_DIR):
rmtree(ARCHIVES_DIR)
makedirs(ARCHIVES_DIR)
if machines_changed:
self._execute_module(module_name='file',
module_args={'path': ARCHIVES_DIR,
'state': 'absent',
},
task_vars=task_vars,
)
self._execute_module(module_name='file',
module_args={'path': ARCHIVES_DIR,
'state': 'directory',
},
task_vars=task_vars,
)
machines = machines_changed.copy()
if self._task.args['hostname'] in machines_changed:
machine = self._task.args['hostname']
machines.remove(machine)
chdir(f'{task_vars["host_install_dir"]}/{INSTALL_CONFIG_DIR}/{machine}')
tar_filename = f'{ARCHIVES_DIR}/host.tar'
with tarfile.open(tar_filename, 'w') as archive:
archive.add('.')
chdir(current_path)
self._transfer_file(tar_filename, tar_filename)
# archive and send
if machines:
chdir(f'{task_vars["host_install_dir"]}/{INSTALL_CONFIG_DIR}')
tar_filename = f'{ARCHIVES_DIR}/machines.tar'
with tarfile.open(tar_filename, 'w') as archive:
for machine in machines:
if machine == self._task.args['hostname']:
continue
archive.add(f'{machine}')
self._transfer_file(tar_filename, tar_filename)
else:
machines = []
# archive and send
chdir(f'{task_vars["host_install_dir"]}/{INSTALL_IMAGES_DIR}/')
tar_filename = f'{ARCHIVES_DIR}/{INSTALL_IMAGES_DIR}.tar'
with tarfile.open(tar_filename, 'w') as archive:
archive.add('.')
self._execute_module(module_name='file',
module_args={'path': '/tmp/new_configurations',
'state': 'directory',
},
task_vars=task_vars,
)
self._transfer_file(tar_filename, tar_filename)
# tests
self._execute_module(module_name='file',
module_args={'path': '/var/lib/risotto/tests',
'state': 'absent',
},
task_vars=task_vars,
)
if copy_tests:
chdir(f'{task_vars["host_install_dir"]}/{INSTALL_TESTS_DIR}/')
tar_filename = f'{ARCHIVES_DIR}/{INSTALL_TESTS_DIR}.tar'
with tarfile.open(tar_filename, 'w') as archive:
archive.add('.')
self._transfer_file(tar_filename, tar_filename)
# templates
self._execute_module(module_name='file',
module_args={'path': '/var/lib/risotto/templates',
'state': 'absent',
},
task_vars=task_vars,
)
if copy_templates:
chdir(f'{task_vars["host_install_dir"]}/')
tar_filename = f'{ARCHIVES_DIR}/{INSTALL_TMPL_DIR}.tar'
with tarfile.open(tar_filename, 'w') as archive:
archive.add(INSTALL_TMPL_DIR)
self._transfer_file(tar_filename, tar_filename)
remote = self._execute_module(module_name='unarchive',
module_args={'remote_src': True,
'src': '/tmp/new_configurations/templates.tar',
'dest': '/var/lib/risotto',
},
task_vars=task_vars,
)
chdir(current_path)
changed = machines_changed != []
return dict(ansible_facts=dict({}),
changed=changed,
machines_changed=machines,
host_changed=self._task.args['hostname'] in machines_changed,
)
class FakeCopy:
def __init__(self, task):
self.task = task
def run(self, *args, **kwargs):
copy2(self.task.args['src'], self.task.args['dest'])
return {}
class FakeGet:
def __init__(self, klass):
self.klass = klass
def fake_get(self, action, *args, task, **kwargs):
if action == 'ansible.legacy.copy':
return FakeCopy(task)
return self.klass.ori_get(action, *args, task=task, **kwargs)
class Tmpl(TmplActionModule):
def __init__(self, task_vars, *args):
super().__init__(*args)
self.task_vars = task_vars
def _early_needs_tmp_path(self):
# do not create tmp remotely
return False
def process(self,
filename: str,
source: str,
true_destfilename: str,
destfilename: str,
destdir: str,
variable: Any,
index: int,
rougail_variables_dict: Dict,
eosfunc: Dict,
extra_variables: Any=None,
):
if source is not None: # pragma: no cover
raise TemplateError(_('source is not supported for ansible'))
task_vars = rougail_variables_dict | self.task_vars
if variable is not None:
task_vars['rougail_variable'] = variable
if index is not None:
task_vars['rougail_index'] = index
if extra_variables:
task_vars['extra_variables'] = extra_variables
task_vars['rougail_filename'] = true_destfilename
task_vars['rougail_destination_dir'] = destdir
self._task.args['src'] = filename
self._task.args['dest'] = destfilename
# add custom filter
custom_filters.update(eosfunc)
# do not copy the file to the host, keep it locally
self._shared_loader_obj.action_loader.ori_get = self._shared_loader_obj.action_loader.get
self._shared_loader_obj.action_loader.get = FakeGet(self._shared_loader_obj.action_loader).fake_get
# template
ret = self.run(task_vars=task_vars)
# restore get function
self._shared_loader_obj.action_loader.get = self._shared_loader_obj.action_loader.ori_get
# remove custom filter
custom_filters.clear()
if ret.get('failed'):
raise TemplateError(f'error while templating "{filename}": {ret["msg"]}')

ansible/file.txt Normal file (1 line)

@@ -0,0 +1 @@
{'pouet': 'a'}

@@ -0,0 +1,8 @@
from risotto.utils import custom_filters
class FilterModule:
"""This filter is used to load custom filter from dataset
"""
def filters(self):
return custom_filters

@@ -0,0 +1,140 @@
#!/usr/bin/python3
from os.path import dirname
def _add(files, file_data, name, name_only, prefix):
if prefix is not None:
name = prefix + name
if name_only:
files.append(name)
else:
files.append({'name': name,
'owner': file_data['owner'],
'group': file_data['group'],
'mode': file_data['mode'],
})
def fileslist(data, is_host=False, name_only=False, prefix=None):
files = []
if is_host:
base_systemd = '/usr/local/lib'
else:
base_systemd = ''
_add(files,
{'owner': 'root', 'group': 'root', 'mode': '0755'},
f'/tmpfiles.d/0rougail.conf',
name_only,
prefix,
)
for service, service_data in data.items():
if not service_data['activate']:
if service_data['manage']:
if not service_data.get('undisable', False) and not service_data['engine'] and not service_data.get('target'):
_add(files,
{'owner': 'root', 'group': 'root', 'mode': '0755'},
base_systemd + '/systemd/system/' + service_data['doc'],
name_only,
prefix,
)
else:
if service_data['manage'] and service_data['engine']:
_add(files,
{'owner': 'root', 'group': 'root', 'mode': '0755'},
base_systemd + '/systemd/system/' + service_data['doc'],
name_only,
prefix,
)
if service_data.get('target'):
_add(files,
{'owner': 'root', 'group': 'root', 'mode': '0755'},
f'/systemd/system/{service_data["target"]}.target.wants/{service_data["doc"]}',
name_only,
prefix,
)
if 'overrides' in service_data:
for override_data in service_data['overrides'].values():
_add(files,
{'owner': 'root', 'group': 'root', 'mode': '0755'},
base_systemd + '/systemd/system/' + override_data['name'] + '.d/rougail.conf',
name_only,
prefix,
)
if 'ip' in service_data:
_add(files,
{'owner': 'root', 'group': 'root', 'mode': '0755'},
base_systemd + '/systemd/system/' + service_data['doc'] + '.d/rougail_ip.conf',
name_only,
prefix,
)
if 'files' not in service_data:
continue
for file_data in service_data['files'].values():
if not file_data['activate'] or file_data['included'] == 'content':
continue
if isinstance(file_data['name'], list):
for name in file_data['name']:
_add(files, file_data, name, name_only, prefix)
else:
_add(files, file_data, file_data['name'], name_only, prefix)
return files
def directorieslist(data):
directories = {'/usr/local/lib/systemd/system/'}
for service, service_data in data.items():
if 'files' not in service_data:
continue
for file_data in service_data['files'].values():
if not file_data['activate']:
continue
if isinstance(file_data['name'], list):
for name in file_data['name']:
directories.add(dirname(name))
else:
directories.add(dirname(file_data['name']))
return list(directories)
def machineslist(data, only=None, only_name=False):
srv = []
if only is not None:
if only not in data:
raise Exception(f"cannot find {only} but only {data.keys()}")
if only_name:
srv.append(only)
else:
srv.append({'name': only,
'srv': data[only]['machine']['add_srv'],
}
)
else:
for host, host_data in data.items():
if '.' not in host or not isinstance(host_data, dict) or 'general' not in host_data or host_data['general']['module_name'] == 'host':
continue
if only_name:
srv.append(host)
else:
srv.append({'name': host,
'srv': host_data['machine']['add_srv'],
}
)
return srv
def modulename(data, servername):
return data[servername]['general']['module_name']
class FilterModule:
def filters(self):
return {
'fileslist': fileslist,
'directorieslist': directorieslist,
'machineslist': machineslist,
'modulename': modulename,
}

@@ -0,0 +1,32 @@
#!/usr/bin/python3
"""
Silique (https://www.silique.fr)
Copyright (C) 2023
distributed under the GPL-2 or later license
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
"""
from rougail.utils import normalize_family
class FilterModule:
def filters(self):
return {
'normalize_family': normalize_family,
}

@@ -0,0 +1,11 @@
from jinja2.exceptions import TemplateRuntimeError
def fraise(msg):
raise TemplateRuntimeError(msg)
class FilterModule:
def filters(self):
return {
'raise': fraise,
}

ansible/host.yml Normal file (101 lines)

@@ -0,0 +1,101 @@
---
- name: "Populate service facts"
service_facts:
- name: "Set timezone"
timezone:
name: Europe/Paris
- name: Set a hostname
ansible.builtin.hostname:
name: "{{ inventory_hostname }}"
- name: "Install packages"
apt:
pkg: "{{ vars[inventory_hostname]['general']['host_packages'] }}"
update_cache: yes
state: latest
- name: "Remove packages"
apt:
pkg: "{{ vars[inventory_hostname]['general']['host_removed_packages'] }}"
update_cache: yes
state: absent
- name: "Add keyrings directory"
file:
path: /etc/apt/keyrings
state: directory
mode: "755"
- name: "Add vector signed repositories"
ansible.builtin.get_url:
url: https://repositories.timber.io/public/vector/gpg.3543DB2D0A2BC4B8.key
dest: /etc/apt/keyrings/vector.asc
- name: "Add vector repository"
ansible.builtin.apt_repository:
repo: "deb [signed-by=/etc/apt/keyrings/vector.asc] https://repositories.timber.io/public/vector/deb/debian {{ ansible_distribution_release }} main"
state: present
- name: "Install vector"
ansible.builtin.apt:
name: vector
update_cache: yes
state: present
- name: "Host is modified"
include_tasks: host_modified.yml
when: build_host.host_changed
- name: "Copy machines scripts"
ansible.builtin.copy:
src: "{{ item }}"
dest: "/usr/local/sbin"
owner: "root"
group: "root"
mode: "0755"
loop: "{{ lookup('fileglob', 'sbin/*', wantlist=True) | list }}"
- name: "Remove dest images files"
file:
path: /var/lib/risotto/images_files
state: "{{ item }}"
mode: "0700"
with_items:
- absent
- directory
- name: "Copy images files"
unarchive:
remote_src: true
src: "/tmp/new_configurations/images_files.tar"
dest: "/var/lib/risotto/images_files"
- name: "Create versions directory"
file:
path: /var/lib/risotto/machines_informations
state: directory
mode: "0700"
- name: "Empty tests files"
file:
path: /var/lib/risotto/tests
state: "{{ item }}"
mode: "0700"
with_items:
- absent
- directory
- name: "Copy tests files"
unarchive:
remote_src: true
src: "/tmp/new_configurations/tests.tar"
dest: "/var/lib/risotto/tests"
when: copy_tests
- name: "Create TLS directory"
file:
path: /var/lib/risotto/tls
state: directory
mode: "755"

ansible/host_modified.yml Normal file (75 lines)

@@ -0,0 +1,75 @@
- name: "Stop services"
ansible.builtin.service:
name: "{{ item.value['doc'] }}"
state: stopped
when: item.value['manage'] and item.value['activate'] and item.value['doc'].endswith('.service') and not item.value['doc'].endswith('@.service') and item.value['engine'] and item.value['doc'] in services
loop: "{{ vars[inventory_hostname]['services'] | dict2items }}"
loop_control:
label: "{{ item.value['doc'] }}"
- name: "Remove old config files"
file:
path: /usr/local/lib/
state: "{{ item }}"
mode: "0700"
with_items:
- absent
- directory
- name: "Copy config files"
unarchive:
remote_src: true
src: "/tmp/new_configurations/host.tar"
dest: /usr/local/lib/
owner: root
group: root
- name: "Execute systemd-tmpfiles"
command: /usr/bin/systemd-tmpfiles --create --clean --remove -E --exclude-prefix=/tmp
- name: "Remove tmpfiles files directory"
local_action:
module: file
path: /usr/local/lib/tmpfiles.d/
state: absent
- name: "Reload systemd services configuration"
ansible.builtin.systemd:
daemon_reload: yes
- name: "Enable services"
when: item.value['manage'] and item.value['activate'] and '@.service' not in item.value['doc']
ansible.builtin.service:
name: "{{ item.value['doc'] }}"
enabled: yes
loop: "{{ vars[inventory_hostname]['services'] | dict2items }}"
loop_control:
label: "{{ item.value['doc'] }}"
- name: "Disable services"
when: item.value['manage'] and not item.value['activate'] and not item.value['undisable'] and '@.service' not in item.value['doc']
ansible.builtin.service:
name: "{{ item.value['doc'] }}"
enabled: no
loop: "{{ vars[inventory_hostname]['services'] | dict2items }}"
loop_control:
label: "{{ item.value['doc'] }}"
- name: "Start services"
when: item.value['manage'] and item.value['activate'] and item.value['doc'].endswith('.service') and not item.value['doc'].endswith('@.service') and item.value['engine']
ignore_errors: true
ansible.builtin.service:
name: "{{ item.value['doc'] }}"
state: started
loop: "{{ vars[inventory_hostname]['services'] | dict2items }}"
loop_control:
label: "{{ item.value['doc'] }}"
- name: "Restart services"
when: item.value['manage'] and item.value['activate'] and item.value['doc'].endswith('.service') and not item.value['doc'].endswith('@.service') and not item.value['engine']
ansible.builtin.service:
name: "{{ item.value['doc'] }}"
state: restarted
loop: "{{ vars[inventory_hostname]['services'] | dict2items }}"
loop_control:
label: "{{ item.value['doc'] }}"

ansible/inventory.py Executable file (136 lines)

@@ -0,0 +1,136 @@
#!/usr/bin/env python
'''
Example custom dynamic inventory script for Ansible, in Python.
'''
from argparse import ArgumentParser
from json import load as json_load, dumps, JSONEncoder
from os import remove
from os.path import isfile
from traceback import print_exc
from sys import stderr, argv
from risotto.machine import load, TIRAMISU_CACHE, VALUES_CACHE, INFORMATIONS_CACHE, ROUGAIL_NAMESPACE, ROUGAIL_NAMESPACE_DESCRIPTION
from tiramisu import Config
from tiramisu.error import PropertiesOptionError
from rougail.utils import normalize_family
from rougail import RougailSystemdTemplate, RougailConfig
from rougail.template.base import RougailLeader, RougailExtra
DEBUG = False
class RougailEncoder(JSONEncoder):
def default(self, obj):
if isinstance(obj, RougailLeader):
return obj._follower
if isinstance(obj, RougailExtra):
return obj._suboption
if isinstance(obj, PropertiesOptionError):
return 'PropertiesOptionError'
return JSONEncoder.default(self, obj)
class RisottoInventory(object):
def __init__(self):
parser = ArgumentParser()
parser.add_argument('--list', action='store_true')
parser.add_argument('--host', action='store')
parser.add_argument('--nocache', action='store_true')
parser.add_argument('--debug', action='store_true')
parser.add_argument('--pretty_print', action='store_true')
parser.add_argument('--quite', action='store_true')
self.args = parser.parse_args()
if self.args.debug:
global DEBUG
DEBUG = True
def run(self):
if self.args.list and self.args.host:
raise Exception('cannot have --list and --host together')
if self.args.list or self.args.nocache:
if isfile(TIRAMISU_CACHE):
remove(TIRAMISU_CACHE)
if isfile(VALUES_CACHE):
remove(VALUES_CACHE)
if isfile(INFORMATIONS_CACHE):
remove(INFORMATIONS_CACHE)
config = load(TIRAMISU_CACHE,
VALUES_CACHE,
INFORMATIONS_CACHE,
)
if self.args.list:
return self.do_inventory(config)
elif self.args.host:
return self.get_vars(config, self.args.host)
raise Exception('pfff')
def do_inventory(self,
config: Config,
) -> dict:
servers = [subconfig.doc() for subconfig in config.option.list('optiondescription') if subconfig.information.get('module') == 'host']
return dumps({
'group': {
'hosts': servers,
'vars': {
# FIXME
# 'ansible_ssh_host': '192.168.0.28',
'ansible_ssh_user': 'root',
'ansible_python_interpreter': '/usr/bin/python3'
}
}
})
def get_vars(self,
config: Config,
host_name: str,
) -> dict:
ret = {}
rougailconfig = RougailConfig.copy()
rougailconfig['variable_namespace'] = ROUGAIL_NAMESPACE
rougailconfig['variable_namespace_description'] = ROUGAIL_NAMESPACE_DESCRIPTION
for subconfig in config.option.list('optiondescription'):
server_name = subconfig.description()
module_name = subconfig.option(subconfig.information.get('provider:global:module_name')).value.get()
if module_name == 'host' and server_name != host_name:
continue
engine = RougailSystemdTemplate(subconfig, rougailconfig)
engine.load_variables(with_flatten=False)
if module_name != 'host' and engine.rougail_variables_dict['general']['host'] != host_name:
continue
ret[server_name] = engine.rougail_variables_dict
ret['modules'] = config.information.get('modules')
ret['delete_old_image'] = False
ret['configure_host'] = True
ret['only_machine'] = None
ret['copy_templates'] = False
ret['copy_tests'] = False
ret['host_install_dir'] = ret[host_name]['general']['host_install_dir']
return dumps(ret, cls=RougailEncoder)
# Get the inventory.
def main():
try:
inv = RisottoInventory()
values = inv.run()
if inv.args.pretty_print:
from pprint import pprint
from json import loads
pprint(loads(values))
elif not inv.args.quite:
print(values)
except Exception as err:
if DEBUG:
print_exc()
print('---', file=stderr)
extra=''
else:
extra=f'\nmore informations with commandline "{" ".join(argv)} --debug"'
print(f'{err}{extra}', file=stderr)
exit(1)
main()
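Since the inventory is an executable script, it can also be exercised by hand when debugging; the flags below are the ones defined in its `ArgumentParser` (a usage sketch, not output shown in this diff):
```bash
# List the hosts group (also clears the tiramisu/values/informations caches).
./ansible/inventory.py --list --pretty_print
# Dump the variables computed for one host; cloud.silique.fr is the host name
# used as an example in the README above.
./ansible/inventory.py --host cloud.silique.fr --pretty_print
# On error, --debug prints the full traceback instead of the short message.
./ansible/inventory.py --host cloud.silique.fr --debug
```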

@@ -0,0 +1,83 @@
#!/usr/bin/python3
from time import sleep
from os import fdopen, walk, readlink, chdir, getcwd
from os.path import join, islink, isdir
from ansible.module_utils.basic import AnsibleModule
def run_module():
# define available arguments/parameters a user can pass to the module
module_args = dict(
# shasums=dict(type='dict', required=True),
directories=dict(type='list', required=True),
)
# seed the result dict in the object
# we primarily care about changed and state
# changed is if this module effectively modified the target
# state will include any data that you want your module to pass back
# for consumption, for example, in a subsequent task
result = dict(
directories={},
)
# the AnsibleModule object will be our abstraction working with Ansible
# this includes instantiation, a couple of common attr would be the
# args/params passed to the execution, as well as if the module
# supports check mode
module = AnsibleModule(
argument_spec=module_args,
supports_check_mode=True,
)
current_path = getcwd()
for directory in module.params['directories']:
result['directories'][directory] = {}
if not isdir(directory):
continue
chdir(directory)
search_paths = [join(directory_[2:], f) for directory_, subdirectories, files in walk('.') for f in files]
for path in search_paths:
full_path = join(directory, path)
if not islink(full_path):
result['directories'][directory][path] = module.digest_from_file(full_path, 'sha256')
else:
result['directories'][directory][path] = readlink(full_path)
chdir(current_path)
# current_path = getcwd()
# for server_name, dico in module.params['shasums'].items():
# root = dico['config_dir']
# if not isdir(root):
# result['machines_changed'].append(server_name)
# continue
# chdir(root)
# search_paths = [join(directory[2:], f) for directory, subdirectories, files in walk('.') for f in files]
# chdir(current_path)
# for path in search_paths:
# if path in dico['shasums']:
# full_path = join(root, path)
# if not islink(full_path):
# if module.digest_from_file(full_path, 'sha256') != dico['shasums'][path]:
# result['machines_changed'].append(server_name)
# break
# elif dico['shasums'][path] != readlink(full_path):
# result['machines_changed'].append(server_name)
# break
# del dico['shasums'][path]
# else:
# result['machines_changed'].append(server_name)
# break
# if server_name not in result['machines_changed'] and dico['shasums']:
# result['machines_changed'].append(server_name)
module.exit_json(**result)
def main():
run_module()
if __name__ == '__main__':
main()

@@ -0,0 +1,219 @@
#!/usr/bin/python3
from time import sleep
from os import fdopen
from dbus import SystemBus, Array
from dbus.exceptions import DBusException
from subprocess import run
from ansible.module_utils.basic import AnsibleModule
def stop(bus, machines):
changed = False
remote_object = bus.get_object('org.freedesktop.machine1',
'/org/freedesktop/machine1',
False,
)
res = remote_object.ListMachines(dbus_interface='org.freedesktop.machine1.Manager')
started_machines = [str(r[0]) for r in res if str(r[0]) != '.host']
for host in machines:
if host not in started_machines:
continue
changed = True
remote_object.TerminateMachine(host, dbus_interface='org.freedesktop.machine1.Manager')
idx = 0
errors = []
while True:
res = remote_object.ListMachines(dbus_interface='org.freedesktop.machine1.Manager')
started_machines = [str(r[0]) for r in res if str(r[0]) != '.host']
for host in machines:
if host in started_machines:
break
else:
break
sleep(1)
idx += 1
if idx == 120:
errors.append('Cannot not stopped: ' + ','.join(started_machines))
break
return changed, errors
def start(bus, machines):
changed = False
remote_object = bus.get_object('org.freedesktop.machine1',
'/org/freedesktop/machine1',
False,
)
res = remote_object.ListMachines(dbus_interface='org.freedesktop.machine1.Manager')
started_machines = [str(r[0]) for r in res if str(r[0]) != '.host']
remote_object_system = bus.get_object('org.freedesktop.systemd1',
'/org/freedesktop/systemd1',
False,
)
for host in machines:
if host in started_machines:
continue
changed = True
service = f'systemd-nspawn@{host}.service'
remote_object_system.StartUnit(service, 'fail', dbus_interface='org.freedesktop.systemd1.Manager')
errors = []
idx = 0
while True:
res = remote_object.ListMachines(dbus_interface='org.freedesktop.machine1.Manager')
started_machines = [str(r[0]) for r in res if str(r[0]) != '.host']
for host in machines:
if host not in started_machines:
break
else:
break
sleep(1)
idx += 1
if idx == 120:
hosts = set(machines) - set(started_machines)
errors.append('Cannot not start: ' + ','.join(hosts))
break
if not errors:
idx = 0
for host in machines:
cmd = ['/usr/bin/systemctl', 'is-system-running']
error = False
while True:
try:
ret = []
res = remote_object.OpenMachineShell(host,
'',
cmd[0],
Array(cmd, signature='s'),
Array(['TERM=dumb'], signature='s'),
dbus_interface='org.freedesktop.machine1.Manager',
)
fd = res[0].take()
fh = fdopen(fd)
while True:
try:
ret.append(fh.readline().strip())
except OSError as err:
if err.errno != 5:
raise err from err
break
if not ret:
errors.append(f'Cannot check {host} status')
error = True
break
if ret[0] in ['running', 'degraded']:
break
except DBusException:
pass
idx += 1
sleep(1)
if idx == 120:
errors.append(f'Cannot not start {host} ({ret})')
break
if error:
continue
if ret and ret[0] == 'running':
continue
cmd = ['/usr/bin/systemctl', '--state=failed', '--no-legend', '--no-page']
res = remote_object.OpenMachineShell(host,
'',
cmd[0],
Array(cmd, signature='s'),
Array(['TERM=dumb'], signature='s'),
dbus_interface='org.freedesktop.machine1.Manager',
)
fd = res[0].take()
fh = fdopen(fd)
ret = []
idx2 = 0
while True:
try:
ret.append(fh.readline().strip())
except OSError as err:
if err.errno != 5:
raise err from err
break
idx2 += 1
if idx2 == 120:
errors.append(f'Cannot not get status to {host}')
break
errors.append(f'{host}: ' + '\n'.join(ret))
return changed, errors
def enable(machines):
cmd = ['/usr/bin/machinectl', 'enable'] + machines
run(cmd)
return True
def run_module():
# define available arguments/parameters a user can pass to the module
module_args = dict(
state=dict(type='str', required=True),
machines=dict(type='list', required=True),
tls_machine=dict(type='str', required=False),
)
# seed the result dict in the object
# we primarily care about changed and state
# changed is if this module effectively modified the target
# state will include any data that you want your module to pass back
# for consumption, for example, in a subsequent task
result = dict(
changed=False,
message=''
)
# the AnsibleModule object will be our abstraction working with Ansible
# this includes instantiation, a couple of common attr would be the
# args/params passed to the execution, as well as if the module
# supports check mode
module = AnsibleModule(
argument_spec=module_args,
supports_check_mode=True
)
# if the user is working with this module in only check mode we do not
# want to make any changes to the environment, just return the current
# state with no modifications
if module.check_mode:
module.exit_json(**result)
# manipulate or modify the state as needed (this is going to be the
# part where your module will do what it needs to do)
machines = module.params['machines']
tls_machine = module.params.get('tls_machine')
if module.params['state'] == 'stopped':
if tls_machine and tls_machine in machines:
machines.remove(tls_machine)
bus = SystemBus()
result['changed'], errors = stop(bus, machines)
if errors:
errors = '\n\n'.join(errors)
module.fail_json(msg=f'Some machines are not stopping correctly {errors}', **result)
elif module.params['state'] == 'started':
bus = SystemBus()
result['changed'], errors = start(bus, machines)
if errors:
errors = '\n\n'.join(errors)
module.fail_json(msg=f'Some machines are not running correctly {errors}', **result)
elif module.params['state'] == 'enabled':
result['changed'] = enable(machines)
else:
module.fail_json(msg=f"Unknown state: {module.params['state']}")
# in the event of a successful module execution, you will want to
# simple AnsibleModule.exit_json(), passing the key/value results
module.exit_json(**result)
def main():
run_module()
if __name__ == '__main__':
main()

ansible/machine.yml Normal file (2 lines)

@@ -0,0 +1,2 @@
- name: "Create informations for {{ item.name }}"
ansible.builtin.shell: "/usr/bin/echo {{ vars | modulename(item.name) }} > /var/lib/risotto/machines_informations/{{ item.name }}.image"

ansible/machines.yml Normal file (69 lines)

@@ -0,0 +1,69 @@
- name: "Rebuild images"
ansible.builtin.shell: "/usr/local/sbin/update_images {{ vars[vars['inventory_hostname']]['general']['tls_server'] }} do_not_start"
register: ret
failed_when: ret.rc != 0
- name: "Stop machine TLS"
machinectl:
state: stopped
machines: "{{ vars[vars['inventory_hostname']]['general']['tls_server'] }}"
when: vars[vars['inventory_hostname']]['general']['tls_server'] in machines_changed
- name: "Remove TLS files directory"
file:
path: "/var/lib/risotto/configurations/{{ vars[vars['inventory_hostname']]['general']['tls_server'] }}"
state: absent
when: vars[vars['inventory_hostname']]['general']['tls_server'] in machines_changed
- name: "Copy TLS configuration"
unarchive:
src: /tmp/new_configurations/machines.tar
dest: "/var/lib/risotto/configurations/"
include: "{{ vars[vars['inventory_hostname']]['general']['tls_server'] }}"
owner: root
group: root
when: vars[vars['inventory_hostname']]['general']['tls_server'] in machines_changed
- name: "Start machine TLS"
machinectl:
state: started
machines: "{{ vars[vars['inventory_hostname']]['general']['tls_server'] }}"
when: vars[vars['inventory_hostname']]['general']['tls_server'] in machines_changed
- name: "Stop machines with new configuration {{ machines_changed }}"
machinectl:
state: stopped
machines: "{{ machines_changed }}"
tls_machine: "{{ vars[vars['inventory_hostname']]['general']['tls_server'] }}"
- name: "Remove files directory"
file:
path: "/var/lib/risotto/configurations/{{ item }}"
state: absent
loop: "{{ machines_changed }}"
- name: "Copy configuration"
unarchive:
src: /tmp/new_configurations/machines.tar
dest: /var/lib/risotto/configurations/
owner: root
group: root
when: machines_changed
- name: "Enable machines"
machinectl:
state: enabled
machines: "{{ vars | machineslist(only_name=True) }}"
tls_machine: "{{ vars[vars['inventory_hostname']]['general']['tls_server'] }}"
- name: "Start machines"
machinectl:
state: started
machines: "{{ vars | machineslist(only_name=True) }}"
tls_machine: "{{ vars[vars['inventory_hostname']]['general']['tls_server'] }}"
- name: "Remove compressed files directory"
local_action:
module: file
path: /tmp/new_configurations
state: absent

ansible/password Symbolic link (1 line)

@@ -0,0 +1 @@
../password/

ansible/playbook.yml Normal file (35 lines)

@@ -0,0 +1,35 @@
---
- name: Risotto
hosts: all
tasks:
- name: "Build host files"
rougail:
hostname: "{{ vars['inventory_hostname'] }}"
only_machine: "{{ only_machine }}"
configure_host: "{{ configure_host }}"
copy_tests: "{{ copy_tests }}"
copy_templates: "{{ copy_templates }}"
register: build_host
- name: "Change"
ansible.builtin.debug:
var: build_host
- name: "Configure the host"
include_tasks: host.yml
when: configure_host == true
- name: "Prepare machine configuration"
include_tasks: machine.yml
when: item.name in build_host.machines_changed
loop: "{{ vars | machineslist(only=only_machine) }}"
# - name: "Remove images"
# include_tasks: remove_image.yml
# loop: "{{ vars | machineslist(only=only_machine) }}"
# when: delete_old_image == true
#
- name: "Install and apply configurations"
include_tasks: machines.yml
vars:
machines_changed: "{{ build_host.machines_changed }}"

ansible/remove_image.yml Normal file (14 lines)

@@ -0,0 +1,14 @@
- name: "Stop machine {{ item.name }}"
machinectl:
state: stopped
machines: "{{ item.name }}"
- name: "Remove old machine {{ item.name }}"
file:
path: /var/lib/machines/{{ item.name }}
state: absent
- name: "Remove old image {{ vars | modulename(item.name) }}"
file:
path: "/var/lib/risotto/images/{{ vars | modulename(item.name) }}"
state: absent

ansible/sbin/backup_images Executable file (27 lines)

@@ -0,0 +1,27 @@
#!/bin/bash -ex
BACKUP_DIR="/root/backup"
MACHINES=""
for nspawn in $(ls /etc/systemd/nspawn/*.nspawn); do
nspawn_file=$(basename $nspawn)
machine=${nspawn_file%.*}
if [ -d "/var/lib/risotto/srv/$machine" ]; then
MACHINES="$MACHINES$machine "
fi
done
cd /var/lib/risotto/srv/
mkdir -p "$BACKUP_DIR"
for machine in $MACHINES; do
BACKUP_FILE="$BACKUP_DIR/backup_$machine.tar.bz2"
rm -f "$BACKUP_FILE"
if [ -f "/var/lib/risotto/configurations/$machine/sbin/risotto_backup" ]; then
machinectl -q shell $machine /usr/local/lib/sbin/risotto_backup
tar --ignore-failed-read -cJf $BACKUP_FILE $machine/backup
elif [ ! -f "/var/lib/risotto/configurations/$machine/no_risotto_backup" ]; then
tar --ignore-failed-read -cJf $BACKUP_FILE $machine
fi
done
exit 0

ansible/sbin/build_image Executable file (192 lines)

@@ -0,0 +1,192 @@
#!/bin/bash -ex
IMAGE_NAME=$1
if [ -z "$1" ]; then
ONLY_IF_DATASET_MODIF=false
else
ONLY_IF_DATASET_MODIF=true
fi
if [ -z "$IMAGE_NAME" ]; then
echo "PAS DE NOM DE MODULE"
exit 1
fi
# root dir configuration
RISOTTO_DIR="/var/lib/risotto"
RISOTTO_IMAGE_DIR="$RISOTTO_DIR/images"
# image configuration
IMAGE_BASE_RISOTTO_BASE_DIR="$RISOTTO_IMAGE_DIR/image_bases"
IMAGE_NAME_RISOTTO_IMAGE_DIR_TMP="$RISOTTO_IMAGE_DIR/tmp/$IMAGE_NAME"
IMAGE_NAME_RISOTTO_IMAGE_DIR="$RISOTTO_IMAGE_DIR/$IMAGE_NAME"
IMAGE_DIR_RECIPIENT_IMAGE="$RISOTTO_DIR/images_files/$IMAGE_NAME"
rm -f /var/log/risotto/build_image.log
mkdir -p "$RISOTTO_IMAGE_DIR" "$RISOTTO_IMAGE_DIR/tmp/"
PKG=""
BASE_DIR=""
for script in $(ls "$IMAGE_DIR_RECIPIENT_IMAGE"/preinstall/*.sh 2> /dev/null); do
. "$script"
done
if [ -z "$OS_NAME" ]; then
echo "NO OS NAME DEFINED"
exit 1
fi
if [ -z "$RELEASEVER" ]; then
echo "NO RELEASEVER DEFINED"
exit 1
fi
if [ -z "$INSTALL_TOOL" ]; then
echo "NO INSTALL TOOL DEFINED"
exit 1
fi
BASE_NAME="$OS_NAME-$RELEASEVER"
BASE_DIR="$IMAGE_BASE_RISOTTO_BASE_DIR/$BASE_NAME"
TMP_BASE_DIR="$IMAGE_BASE_RISOTTO_BASE_DIR/tmp/$BASE_NAME"
BASE_PKGS_FILE="$IMAGE_BASE_RISOTTO_BASE_DIR-$BASE_NAME.pkgs"
BASE_LOCK="$IMAGE_BASE_RISOTTO_BASE_DIR-$BASE_NAME.build"
function dnf_opt_base() {
INSTALL_DIR=$1
echo "--setopt=install_weak_deps=False --setopt=fastestmirror=True --nodocs --noplugins --installroot=$INSTALL_DIR --releasever $RELEASEVER"
}
function dnf_opt() {
INSTALL_DIR=$1
INSTALL_PKG=$2
OPT=$(dnf_opt_base "$INSTALL_DIR")
echo "$OPT install $INSTALL_PKG"
}
function new_package_base() {
if [ "$INSTALL_TOOL" = "dnf" ]; then
OPT=$(dnf_opt "$TMP_BASE_DIR" "$BASE_PKG")
dnf --assumeno $OPT | grep ^" " > "$BASE_PKGS_FILE".new
else
debootstrap --include="$BASE_PKG" --variant=minbase "$RELEASEVER" "$TMP_BASE_DIR" >> /var/log/risotto/build_image.log
chroot "$TMP_BASE_DIR" dpkg-query -f '${binary:Package} ${source:Version}\n' -W > "$BASE_PKGS_FILE".new
fi
}
function install_base() {
if [ "$INSTALL_TOOL" = "dnf" ]; then
OPT=$(dnf_opt "$TMP_BASE_DIR" "$BASE_PKG")
dnf --assumeyes $OPT
fi
}
function new_package() {
if [ "$INSTALL_TOOL" = "dnf" ]; then
OPT=$(dnf_opt_base "$IMAGE_NAME_RISOTTO_IMAGE_DIR_TMP")
set +e
dnf --assumeno $OPT update >> /var/log/risotto/build_image.log
set -e
OPT=$(dnf_opt "$IMAGE_NAME_RISOTTO_IMAGE_DIR_TMP" "$PKG")
dnf --assumeno $OPT | grep ^" " > "$IMAGE_NAME_RISOTTO_IMAGE_DIR".pkgs.new
else
chroot "$IMAGE_NAME_RISOTTO_IMAGE_DIR_TMP" apt update >> /var/log/risotto/build_image.log 2>&1
chroot "$IMAGE_NAME_RISOTTO_IMAGE_DIR_TMP" apt install --no-install-recommends --yes $PKG -s 2>/dev/null|grep ^"Inst " > "$IMAGE_NAME_RISOTTO_IMAGE_DIR".pkgs.new
fi
}
function install_pkg() {
if [ "$INSTALL_TOOL" = "dnf" ]; then
OPT=$(dnf_opt "$IMAGE_NAME_RISOTTO_IMAGE_DIR_TMP" "$PKG")
dnf --assumeyes $OPT
else
if [ "$ONLY_IF_DATASET_MODIF" = true ]; then
chroot "$IMAGE_NAME_RISOTTO_IMAGE_DIR_TMP" apt update
fi
chroot "$IMAGE_NAME_RISOTTO_IMAGE_DIR_TMP" bash -c "export DEBIAN_FRONTEND=noninteractive; apt install --no-install-recommends --yes $PKG"
fi
}
if [ ! -f "$BASE_LOCK" ] || [ ! -d "$BASE_DIR" ]; then
echo " - reinstallation de l'image de base"
new_package_base
diff -u "$BASE_PKGS_FILE" "$BASE_PKGS_FILE".new &> /dev/null && NEW_BASE=false || NEW_BASE=true
if [ ! -d "$BASE_DIR" ] || [ "$NEW_BASE" = true ]; then
mkdir -p "$IMAGE_BASE_RISOTTO_BASE_DIR"
rm -rf "$IMAGE_NAME_RISOTTO_IMAGE_DIR_TMP"
install_base
if [ -f "$BASE_PKGS_FILE" ]; then
mv "$BASE_PKGS_FILE" "$BASE_PKGS_FILE".old
fi
mv "$BASE_PKGS_FILE".new "$BASE_PKGS_FILE"
rm -rf "$BASE_DIR"
mv "$TMP_BASE_DIR" "$BASE_DIR"
fi
touch "$BASE_LOCK"
fi
rm -rf "$IMAGE_NAME_RISOTTO_IMAGE_DIR_TMP"
cp --reflink=auto -a "$BASE_DIR/" "$IMAGE_NAME_RISOTTO_IMAGE_DIR_TMP"
if [ -n "$COPR" ]; then
#FIXME signature...
mkdir -p "$REPO_DIR"
cd "$REPO_DIR"
wget -q "$COPR"
cd - > /dev/null
fi
if [ "$FUSION" = true ]; then
dnf -y install "https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$RELEASEVER.noarch.rpm" --installroot="$IMAGE_NAME_RISOTTO_IMAGE_DIR_TMP" >> /var/log/risotto/build_image.log
fi
if [ -f "$IMAGE_NAME_RISOTTO_IMAGE_DIR".base.pkgs ] && [ -f "$IMAGE_NAME_RISOTTO_IMAGE_DIR".pkgs ]; then
echo " - différence(s) avec les paquets de base"
diff -u "$IMAGE_NAME_RISOTTO_IMAGE_DIR".base.pkgs "$BASE_PKGS_FILE" && INSTALL=false || INSTALL=true
[ ! -d "$IMAGE_NAME_RISOTTO_IMAGE_DIR" ] && INSTALL=true
else
INSTALL=true
fi
if [ "$ONLY_IF_DATASET_MODIF" = false ] || [ ! -f "$IMAGE_NAME_RISOTTO_IMAGE_DIR".pkgs ]; then
new_package
else
cp --reflink=auto "$IMAGE_NAME_RISOTTO_IMAGE_DIR".pkgs "$IMAGE_NAME_RISOTTO_IMAGE_DIR".pkgs.new
fi
if [ "$INSTALL" = false ]; then
echo " - différence(s) avec les paquets de l'image"
diff -u "$IMAGE_NAME_RISOTTO_IMAGE_DIR".pkgs "$IMAGE_NAME_RISOTTO_IMAGE_DIR".pkgs.new && INSTALL=false || INSTALL=true
fi
find "$IMAGE_DIR_RECIPIENT_IMAGE" -type f -exec md5sum '{}' \; > "$IMAGE_NAME_RISOTTO_IMAGE_DIR".md5sum.new
if [ "$INSTALL" = false ]; then
echo " - différence(s) du dataset"
diff -u "$IMAGE_NAME_RISOTTO_IMAGE_DIR".md5sum "$IMAGE_NAME_RISOTTO_IMAGE_DIR".md5sum.new && INSTALL=false || INSTALL=true
fi
if [ "$INSTALL" = true ]; then
echo " - installation"
if [ -f "$IMAGE_NAME_RISOTTO_IMAGE_DIR".version ]; then
VERSION=$(cat "$IMAGE_NAME_RISOTTO_IMAGE_DIR".version)
else
VERSION=0
fi
if [ -d "$IMAGE_NAME_RISOTTO_IMAGE_DIR" ]; then
cd "$IMAGE_NAME_RISOTTO_IMAGE_DIR"
make_changelog "$IMAGE_NAME" "$VERSION" "$OS_NAME" "$RELEASEVER" > "$IMAGE_NAME_RISOTTO_IMAGE_DIR"_"$RELEASEVER"_"$VERSION"_changelog.md
cd - > /dev/null
fi
install_pkg
sleep 2
for script in $(ls $IMAGE_DIR_RECIPIENT_IMAGE/postinstall/*.sh 2> /dev/null); do
. "$script"
done
ROOT=$IMAGE_NAME_RISOTTO_IMAGE_DIR_TMP make_volatile /etc
if [ ! "$?" = 0 ]; then
echo "make_volatile failed"
exit 1
fi
rm -rf "$IMAGE_NAME_RISOTTO_IMAGE_DIR"
mv "$IMAGE_NAME_RISOTTO_IMAGE_DIR_TMP" "$IMAGE_NAME_RISOTTO_IMAGE_DIR"
cp --reflink=auto -f "$BASE_PKGS_FILE" "$IMAGE_NAME_RISOTTO_IMAGE_DIR".base.pkgs
mv -f "$IMAGE_NAME_RISOTTO_IMAGE_DIR".pkgs.new "$IMAGE_NAME_RISOTTO_IMAGE_DIR".pkgs
mv -f "$IMAGE_NAME_RISOTTO_IMAGE_DIR".md5sum.new "$IMAGE_NAME_RISOTTO_IMAGE_DIR".md5sum
VERSION=$((VERSION + 1))
echo "$VERSION" > "$IMAGE_NAME_RISOTTO_IMAGE_DIR".version
fi
rm -rf "$IMAGE_NAME_RISOTTO_IMAGE_DIR_TMP"
echo " => OK"
exit 0
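A usage sketch for this script (the image name `mariadb` is hypothetical; the sbin scripts are installed to `/usr/local/sbin` by `ansible/host.yml`):
```bash
# build_image expects the dataset files for the image under
# /var/lib/risotto/images_files/<image>/, whose preinstall/*.sh scripts must
# define OS_NAME, RELEASEVER and INSTALL_TOOL.
/usr/local/sbin/build_image mariadb
```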

ansible/sbin/compare_image Executable file (36 lines)

@@ -0,0 +1,36 @@
#!/bin/bash
SRV=$1
if [ -z "$SRV" ]; then
echo "usage: $0 machine"
exit 1
fi
dirname="/var/lib/risotto/templates/$SRV"
if [ ! -d "$dirname" ]; then
echo "cannot find $dirname"
echo "usage: $0 machine"
exit 1
fi
cd $dirname
find -type f -not -path "./secrets/*" -not -path "./tmpfiles.d/*" -not -path "./sysusers.d/*" -not -path "./systemd/*" -not -path "./tests/*" -not -path "./etc/pki/*" | while read a; do
machine_path="/var/lib/machines/$SRV"
cfile="$machine_path/usr/share/factory/$a"
if [ -f "$cfile" ]; then
diff -u "$dirname/$a" "$cfile"
else
FIRST_LINE="$(head -n 1 $a)"
if [[ "$FIRST_LINE" == "#RISOTTO: file://"* ]]; then
other=${FIRST_LINE:16}
diff -u "$dirname/$a" "$machine_path$other"
elif [[ "$FIRST_LINE" == "#RISOTTO: https://"* ]]; then
other=${FIRST_LINE:10}
echo $other
wget -q $other -O /tmp/template.tmp
diff -u "$dirname/$a" /tmp/template.tmp
elif [ ! "$FIRST_LINE" = "#RISOTTO: do not compare" ]; then
echo "cannot find \"$cfile\" ($dirname/$a)"
fi
fi
done
cd - > /dev/null

ansible/sbin/diagnose Executable file (52 lines)

@@ -0,0 +1,52 @@
#!/bin/bash -e
MACHINES=""
for nspawn in $(ls /etc/systemd/nspawn/*.nspawn); do
nspawn_file=$(basename $nspawn)
machine=${nspawn_file%.*}
MACHINES="$MACHINES$machine "
done
STARTED=""
DEGRADED=""
found=true
idx=0
while [ $found = true ]; do
found=false
echo "tentative $idx"
for machine in $MACHINES; do
if ! echo $STARTED | grep -q " $machine "; then
status=$(machinectl -q shell $machine /usr/bin/systemctl is-system-running 2>/dev/null || echo "not started")
if echo "$status" | grep -q degraded; then
STARTED="$STARTED $machine "
DEGRADED="$DEGRADED $machine"
elif echo "$status" | grep -q running; then
STARTED="$STARTED $machine "
else
found=true
echo "status actuel de $machine : $status"
fi
fi
done
sleep 2
idx=$((idx+1))
if [ $idx = 60 ]; then
break
fi
done
retcode=0
for machine in $MACHINES; do
if ! echo "$STARTED" | grep -q " $machine "; then
echo
echo "========= $machine"
machinectl -q shell $machine /usr/bin/systemctl is-system-running 2>/dev/null || systemctl status systemd-nspawn@$machine.service || true
fi
done
echo $DEGRADED
for machine in $DEGRADED; do
echo
echo "========= $machine"
machinectl -q shell $machine /usr/bin/systemctl --state=failed --no-legend --no-pager
retcode=1
done
exit $retcode

ansible/sbin/make_changelog Executable file (182 lines)

@@ -0,0 +1,182 @@
#!/usr/bin/env python3
import logging
from dnf.conf import Conf
from dnf.cli.cli import BaseCli, Cli
from dnf.cli.output import Output
from dnf.cli.option_parser import OptionParser
from dnf.i18n import _, ucd
from datetime import datetime, timezone
from sys import argv
from os import getcwd, unlink
from os.path import isfile, join
from glob import glob
from subprocess import run
# List new or removed file
def read_dnf_pkg_file(os_name, filename1, filename2):
if os_name == 'debian':
idx_pkg = 0, 1
idx_version = 1, 2
header_idx = 0, 0
else:
idx_pkg = 0, 0
idx_version = 2, 2
header_idx = 2, 2
pass
pkgs = {}
for fidx, filename in enumerate((filename1, filename2)):
if not isfile(filename):
continue
with open(filename, 'r') as pkgs_fh:
for idx, pkg_line in enumerate(pkgs_fh.readlines()):
if idx < header_idx[fidx]:
# header
continue
sp_line = pkg_line.strip().split()
if len(sp_line) < idx_version[fidx] + 1:
continue
pkg = sp_line[idx_pkg[fidx]]
version = sp_line[idx_version[fidx]]
#if pkg in pkgs:
# raise Exception(f'package already set {pkg}?')
if os_name == 'debian' and version.startswith('('):
version = version[1:]
pkgs[pkg] = version
return pkgs
def list_packages(title, packages, packages_info):
print(f'# {title}\n')
if not packages:
print('*Aucun*')
packages = list(packages)
packages = sorted(packages)
for idx, pkg in enumerate(packages):
print(f' - {pkg} ({packages_info[pkg]})')
print()
# List updated packages
class CustomOutput(Output):
def listPkgs(self, *args, **kwargs):
# do not display list
pass
def format_changelog_markdown(changelog):
"""Return changelog formatted as in spec file"""
text = '\n'.join([f' {line}' for line in changelog['text'].split('\n')])
    chlog_str = ' - %s %s\n\n%s\n' % (
        changelog['timestamp'].strftime("%a %b %d %X %Y"),
        ucd(changelog['author']),
        ucd(text))
    return chlog_str


def print_changelogs_markdown(packages):
    # group packages by src.rpm to avoid showing duplicate changelogs
    self = BASE
    bysrpm = dict()
    for p in packages:
        # there are packages without source_name, use name then.
        bysrpm.setdefault(p.source_name or p.name, []).append(p)
    for source_name in sorted(bysrpm.keys()):
        bin_packages = bysrpm[source_name]
        print('- ' + _("Changelogs for {}").format(', '.join([str(pkg) for pkg in bin_packages])))
        print()
        for chl in self.latest_changelogs(bin_packages[0]):
            print(format_changelog_markdown(chl))


def dnf_update(image_name, releasever):
    conf = Conf()
    # obsoletes are already listed
    conf.obsoletes = False
    with BaseCli(conf) as base:
        global BASE
        BASE = base
        base.print_changelogs = print_changelogs_markdown
        custom_output = CustomOutput(base.output.base, base.output.conf)
        base.output = custom_output
        cli = Cli(base)
        image_dir = join(getcwd(), image_name)
        cli.configure(['--setopt=install_weak_deps=False', '--nodocs', '--noplugins', '--installroot=' + image_dir, '--releasever', releasever, 'check-update', '--changelog'], OptionParser())
        logger = logging.getLogger("dnf")
        for h in logger.handlers:
            logger.removeHandler(h)
        logger.addHandler(logging.NullHandler())
        cli.run()


def main(os_name, image_name, old_version, releasever):
    date = datetime.now(timezone.utc).isoformat()
    if old_version == 0:
        title = f"Création de l'image {image_name}"
        subtitle = f"Les paquets de la première image {image_name} sur base Fedora {releasever}"
    else:
        title = f"Nouvelle version de l'image {image_name}"
        subtitle = f"Différence des paquets de l'image {image_name} sur base Fedora {releasever} entre la version {old_version} et {old_version + 1}"
    print(f"""+++
title = "{title}"
description = "{subtitle}"
date = {date}
updated = {date}
draft = false
template = "blog/page.html"
[taxonomies]
authors = ["Automate"]
[extra]
lead = "{subtitle}."
type = "installe"
+++
""")
    new_dict = read_dnf_pkg_file(os_name, f'/var/lib/risotto/images/image_bases-{os_name}-{releasever}.pkgs', f'/var/lib/risotto/images/{image_name}.pkgs.new')
    new_pkg = new_dict.keys()
    old_file = f'/var/lib/risotto/images/{image_name}.pkgs'
    if not old_version or not isfile(old_file):
        list_packages('Liste des paquets', new_pkg, new_dict)
    else:
        ori_dict = read_dnf_pkg_file(os_name, f'/var/lib/risotto/images/{image_name}.base.pkgs', old_file)
        ori_pkg = ori_dict.keys()
        list_packages('Les paquets supprimés', ori_pkg - new_pkg, ori_dict)
        list_packages('Les paquets ajoutés', new_pkg - ori_pkg, new_dict)
        print('# Les paquets mises à jour\n')
        if os_name == 'fedora':
            dnf_update(image_name, releasever)
        else:
            for filename in glob('*.deb'):
                unlink(filename)
            for package in ori_pkg & new_pkg:
                if ori_dict[package] == new_dict[package]:
                    continue
                info = run(['apt', 'download', package], capture_output=True)
                if info.returncode:
                    raise Exception(f'cannot download {package}: {info}')
            packages = list(glob('*.deb'))
            packages.sort()
            for package in packages:
                info = run(['chroot', '.', 'apt-listchanges', '--which', 'both', '-f', 'text', package], capture_output=True)
                if info.returncode:
                    raise Exception(f'cannot list changes for {package}: {info}')
                header = True
                for line in info.stdout.decode().split('\n'):
                    if not header:
                        print(line)
                    if line.startswith('-----------------------'):
                        header = False
                print()
                unlink(package)


if __name__ == "__main__":
    image_name = argv[1]
    old_version = int(argv[2])
    os_name = argv[3]
    releasever = argv[4]
    main(os_name, image_name, old_version, releasever)

76
ansible/sbin/make_volatile Executable file
View file

@ -0,0 +1,76 @@
#!/bin/bash -e
if [ -z "$ROOT" ]; then
    echo "PAS DE ROOT"
    exit 1
fi
echo "$ROOT"
DESTDIR="$ROOT/usr/lib/tmpfiles.d"
CONF_DST="/usr/share/factory"
EXCLUDES="^($ROOT/etc/passwd|$ROOT/etc/group|$ROOT/etc/.updated|$ROOT/etc/.pwd.lock|$ROOT/etc/systemd/network/dhcp.network|$ROOT/etc/sudoers.d/qemubuild)$"
ONLY_COPY="^($ROOT/etc/localtime)$"
FORCE_LINKS="^($ROOT/etc/udev/hwdb.bin)$"
function execute() {
    chroot "$ROOT" "$@"
}
function file_dir_in_tmpfiles() {
    letter=$1
    directory=$2
    local_directory=$(echo $directory|sed "s@^$ROOT@@g")
    mode=$(execute "/usr/bin/stat" "--format" "%a" "$local_directory" | grep -o "[0-9.]\+")
    user=$(execute "/usr/bin/stat" "--format" "%U" "$local_directory" | grep -o "[0-9a-zA-Z.-]\+")
    group=$(execute "/usr/bin/stat" "--format" "%G" "$local_directory" | grep -o "[0-9a-zA-Z.-]\+")
    echo "$letter $local_directory $mode $user $group - -"
}
function calc_symlink_in_tmpfiles() {
    dest_name=$1
    local_dest_name=$2
    src_file=$(readlink "$dest_name")
    symlink_in_tmpfiles "$local_dest_name" "$src_file"
}
function symlink_in_tmpfiles() {
    dest_name=$1
    src_file=$2
    echo "L+ $dest_name - - - - $src_file"
}
function main() {
    dir_config_orig=$1
    name="${dir_config_orig//\//-}"
    dir_config_orig=$ROOT$dir_config_orig
    mkdir -p "$DESTDIR"
    mkdir -p "$ROOTCONF_DST$dir_config_orig"
    systemd_conf="$DESTDIR/risotto$name.conf"
    rm -f $systemd_conf
    shopt -s globstar
    for src_file in $dir_config_orig/**; do
        local_src=$(echo $src_file|sed "s@$ROOT@@g")
        dest_file="$ROOT$CONF_DST$local_src"
        if [[ "$src_file" =~ $EXCLUDES ]]; then
            echo "$src_file: exclude" >&2
        elif [[ -L "$src_file" ]]; then
            calc_symlink_in_tmpfiles "$src_file" "$local_src" >> $systemd_conf
        elif [[ "$src_file" =~ $FORCE_LINKS ]]; then
            symlink_in_tmpfiles "$src_file" "$dest_file" >> $systemd_conf
        elif [[ -d "$src_file" ]]; then
            file_dir_in_tmpfiles 'd' "$src_file" >> $systemd_conf
            [[ ! -d "$dest_file" ]] && mkdir -p "$dest_file"
            #echo "$src_file: directory ok"
        else
            if [[ ! "$src_file" =~ $ONLY_COPY ]]; then
                file_dir_in_tmpfiles "C" "$src_file" >> $systemd_conf
            fi
            [[ -e "$dest_file" ]] && rm -f "$dest_file"
            # not a symlink... a hardlink
            ln "$src_file" "$dest_file"
            #echo "$src_file: file ok"
        fi
    done
}
main "$1"
echo "fin"
exit 0

30
ansible/sbin/test_images Executable file
View file

@ -0,0 +1,30 @@
#!/bin/bash
QUIT_ON_ERROR=true
# QUIT_ON_ERROR=false
CONFIG_DIR="/var/lib/risotto/configurations"
INFO_DIR="/var/lib/risotto/machines_informations"
TEST_DIR="/var/lib/risotto/tests"
TEST_DIR_NAME="tests"
if [ ! -d /var/lib/risotto/tests/ ]; then
    echo "no tests directory"
    exit 1
fi
py_test_option="-s"
if [ "$QUIT_ON_ERROR" = true ]; then
    set -e
    py_test_option="$py_test_option -x"
fi
for nspawn in $(ls /etc/systemd/nspawn/*.nspawn); do
    nspawn_file=$(basename $nspawn)
    machine=${nspawn_file%.*}
    image=$(cat $INFO_DIR/$machine.image)
    imagedir=$TEST_DIR/$image
    machine_test_dir=$CONFIG_DIR/$machine/$TEST_DIR_NAME
    export MACHINE_TEST_DIR=$machine_test_dir
    echo "- $machine"
    py.test-3 $py_test_option "$imagedir"
done

87
ansible/sbin/update_images Executable file
View file

@ -0,0 +1,87 @@
#!/bin/bash -e
TLS_SERVER=$1
if [ -z "$TLS_SERVER" ]; then
    echo "$0 nom_tls_server"
    exit 1
fi
DO_NOT_START=$2
REBOOT_EVERY_MONDAY=$3
# root dir configuration
RISOTTO_DIR="/var/lib/risotto"
RISOTTO_IMAGE_DIR="$RISOTTO_DIR/images"
# image configuration
IMAGE_BASE_RISOTTO_BASE_DIR="$RISOTTO_IMAGE_DIR/image_bases"
if [ -z "$1" ]; then
    rm -f $IMAGE_BASE_RISOTTO_BASE_DIR*.build
fi
mkdir -p /var/log/risotto
ls /var/lib/risotto/images_files/ | while read image; do
    if [ -d /var/lib/risotto/images_files/"$image" ]; then
        echo
        echo "Install image $image" | tee -a /var/log/risotto/update_images.log
        /usr/local/sbin/build_image "$image" || echo "PROBLEME" | tee -a /var/log/risotto/update_images.log
    fi
done
idx=0
if [ -z "$DO_NOT_START" ]; then
    machinectl reboot "$TLS_SERVER" || machinectl start "$TLS_SERVER"
    while true; do
        status=$(machinectl -q shell "$TLS_SERVER" /usr/bin/systemctl is-system-running 2>/dev/null || echo "not started")
        if echo "$status" | grep -q degraded || echo "$status" | grep -q running; then
            break
        fi
        idx=$((idx+1))
        if [ $idx = 60 ]; then
            echo "le serveur $TLS_SERVER n'a pas encore redémarré"
            break
        fi
        sleep 2
    done
fi
MACHINES=""
for nspawn in $(ls /etc/systemd/nspawn/*.nspawn); do
    nspawn_file=$(basename "$nspawn")
    machine=${nspawn_file%.*}
    MACHINES="$MACHINES$machine "
    MACHINE_MACHINES_DIR="/var/lib/machines/$machine"
    IMAGE_NAME_RISOTTO_IMAGE_NAME="$(cat $RISOTTO_DIR/machines_informations/$machine.image)"
    MACHINE_INFO="$RISOTTO_DIR/machines_informations/"
    VERSION_MACHINE="$MACHINE_INFO/$machine.version"
    if [ -n "$REBOOT_EVERY_MONDAY" ] && [ "$(date +%u)" = 1 ]; then
        # update TLS certificate every monday, so stop container
        machinectl stop "$machine" 2> /dev/null || true
        while true; do
            machinectl status "$machine" > /dev/null 2>&1 || break
            sleep 1
        done
    fi
    if [ ! -d "$MACHINE_MACHINES_DIR/etc" ]; then
        rm -f "$VERSION_MACHINE"
    fi
    diff -q "$RISOTTO_IMAGE_DIR/$IMAGE_NAME_RISOTTO_IMAGE_NAME".version "$VERSION_MACHINE" &> /dev/null || (
        echo "Reinstall machine $machine"
        machinectl stop "$machine" 2> /dev/null || true
        while true; do
            machinectl status "$machine" > /dev/null 2>&1 || break
            sleep 1
        done
        rm -rf "$MACHINE_MACHINES_DIR"
        mkdir "$MACHINE_MACHINES_DIR"
        cp -a --reflink=auto "$RISOTTO_IMAGE_DIR/$IMAGE_NAME_RISOTTO_IMAGE_NAME/"* "$MACHINE_MACHINES_DIR"
        cp -a --reflink=auto "$RISOTTO_IMAGE_DIR/$IMAGE_NAME_RISOTTO_IMAGE_NAME".version "$VERSION_MACHINE"
    )
done
if [ -z "$DO_NOT_START" ]; then
    echo "start $MACHINES"
    machinectl start $MACHINES
    sleep 5
    journalctl -n 100 --no-pager
    diagnose
fi
exit 0

35
doc/README.md Normal file
View file

@ -0,0 +1,35 @@
![Logo Risotto](../logo.png "logo risotto")
# Risotto
## A dataset
- [Dataset example](dataset_example/dataset.md)
- [Official dataset](https://cloud.silique.fr/gitea/risotto/dataset/src/branch/main/seed/README.md)
## Infrastructure
- [Infrastructure](infrastructure.md)
- [Examples](dataset_example/infrastructure.md)
## risotto.conf
```toml
[directories]
datasets = ['<path_to_dataset_base>/seed']
dest = 'installations'
dest_templates = 'templates'
[cert_authority]
email = '<email>'
country = 'FR'
locality = 'Dijon'
state = 'France'
org_name = 'Silique'
org_unit_name = 'Cloud'
```
## Usage
![Schema](schema.png "Schéma")

405
doc/authentification.svg Normal file
View file

@ -0,0 +1,405 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
width="210mm"
height="297mm"
viewBox="0 0 210 297"
version="1.1"
id="svg5"
inkscape:version="1.2.1 (9c6d41e410, 2022-07-14)"
sodipodi:docname="authentification.svg"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg">
<sodipodi:namedview
id="namedview7"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageshadow="2"
inkscape:pageopacity="0.0"
inkscape:pagecheckerboard="0"
inkscape:document-units="mm"
showgrid="false"
inkscape:zoom="1.557211"
inkscape:cx="355.12207"
inkscape:cy="181.73517"
inkscape:window-width="2048"
inkscape:window-height="1083"
inkscape:window-x="0"
inkscape:window-y="0"
inkscape:window-maximized="1"
inkscape:current-layer="layer1"
inkscape:showpageshadow="2"
inkscape:deskcolor="#d1d1d1" />
<defs
id="defs2">
<rect
x="465.27745"
y="390.53444"
width="155.19784"
height="121.34324"
id="rect9339" />
<marker
style="overflow:visible"
id="marker47493"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow1Sstart"
inkscape:isstock="true">
<path
transform="matrix(0.2,0,0,0.2,1.2,0)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 0,0 5,-5 -12.5,0 5,5 Z"
id="path47491" />
</marker>
<marker
style="overflow:visible"
id="marker46179"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow1Send"
inkscape:isstock="true">
<path
transform="matrix(-0.2,0,0,-0.2,-1.2,0)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 0,0 5,-5 -12.5,0 5,5 Z"
id="path46177" />
</marker>
<marker
style="overflow:visible"
id="Arrow2Send"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow2Send"
inkscape:isstock="true"
viewBox="0 0 3.4652294 2.5981128"
markerWidth="3.465229"
markerHeight="2.5981126"
preserveAspectRatio="xMidYMid">
<path
transform="matrix(-0.3,0,0,-0.3,0.69,0)"
d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:0.625;stroke-linejoin:round"
id="path45715" />
</marker>
<marker
style="overflow:visible"
id="Arrow2Sstart"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow2Sstart"
inkscape:isstock="true"
viewBox="0 0 3.4652294 2.5981128"
markerWidth="3.465229"
markerHeight="2.5981126"
preserveAspectRatio="xMidYMid">
<path
transform="matrix(0.3,0,0,0.3,-0.69,0)"
d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:0.625;stroke-linejoin:round"
id="path45712" />
</marker>
<marker
style="overflow:visible;"
id="Arrow1Send"
refX="0.0"
refY="0.0"
orient="auto"
inkscape:stockid="Arrow1Send"
inkscape:isstock="true">
<path
transform="scale(0.2) rotate(180) translate(6,0)"
style="fill-rule:evenodd;fill:context-stroke;stroke:context-stroke;stroke-width:1.0pt;"
d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
id="path45697" />
</marker>
<marker
style="overflow:visible"
id="Arrow1Sstart"
refX="0.0"
refY="0.0"
orient="auto"
inkscape:stockid="Arrow1Sstart"
inkscape:isstock="true">
<path
transform="scale(0.2) translate(6,0)"
style="fill-rule:evenodd;fill:context-stroke;stroke:context-stroke;stroke-width:1.0pt"
d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
id="path45694" />
</marker>
<marker
style="overflow:visible;"
id="Arrow1Lend"
refX="0.0"
refY="0.0"
orient="auto"
inkscape:stockid="Arrow1Lend"
inkscape:isstock="true">
<path
transform="scale(0.8) rotate(180) translate(12.5,0)"
style="fill-rule:evenodd;fill:context-stroke;stroke:context-stroke;stroke-width:1.0pt;"
d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
id="path45685" />
</marker>
<marker
style="overflow:visible"
id="marker46179-3"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow1Send"
inkscape:isstock="true"
viewBox="0 0 4.4434635 2.539122"
markerWidth="4.4434633"
markerHeight="2.5391221"
preserveAspectRatio="xMidYMid">
<path
transform="matrix(-0.2,0,0,-0.2,-1.2,0)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 0,0 5,-5 -12.5,0 5,5 Z"
id="path46177-6" />
</marker>
<marker
style="overflow:visible"
id="marker46179-3-6"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow1Send"
inkscape:isstock="true">
<path
transform="matrix(-0.2,0,0,-0.2,-1.2,0)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 0,0 5,-5 -12.5,0 5,5 Z"
id="path46177-6-2" />
</marker>
<marker
style="overflow:visible"
id="marker46179-1"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow1Send"
inkscape:isstock="true"
viewBox="0 0 4.4434635 2.539122"
markerWidth="4.4434633"
markerHeight="2.5391221"
preserveAspectRatio="xMidYMid">
<path
transform="matrix(-0.2,0,0,-0.2,-1.2,0)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 0,0 5,-5 -12.5,0 5,5 Z"
id="path46177-8" />
</marker>
<marker
style="overflow:visible"
id="marker46179-9"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow1Send"
inkscape:isstock="true">
<path
transform="matrix(-0.2,0,0,-0.2,-1.2,0)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 0,0 5,-5 -12.5,0 5,5 Z"
id="path46177-2" />
</marker>
<marker
style="overflow:visible"
id="marker46179-9-3"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow1Send"
inkscape:isstock="true">
<path
transform="matrix(-0.2,0,0,-0.2,-1.2,0)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 0,0 5,-5 -12.5,0 5,5 Z"
id="path46177-2-7" />
</marker>
<marker
style="overflow:visible"
id="marker46179-3-6-9"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow1Send"
inkscape:isstock="true">
<path
transform="matrix(-0.2,0,0,-0.2,-1.2,0)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 0,0 5,-5 -12.5,0 5,5 Z"
id="path46177-6-2-2" />
</marker>
<marker
style="overflow:visible"
id="Arrow2Sstart-8"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow2Sstart"
inkscape:isstock="true">
<path
transform="matrix(0.3,0,0,0.3,-0.69,0)"
d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:0.625;stroke-linejoin:round"
id="path45712-9" />
</marker>
<marker
style="overflow:visible"
id="marker46179-3-6-9-7"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow1Send"
inkscape:isstock="true">
<path
transform="matrix(-0.2,0,0,-0.2,-1.2,0)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 0,0 5,-5 -12.5,0 5,5 Z"
id="path46177-6-2-2-3" />
</marker>
<marker
style="overflow:visible"
id="marker46179-3-6-4"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow1Send"
inkscape:isstock="true"
viewBox="0 0 4.4434635 2.539122"
markerWidth="4.4434633"
markerHeight="2.5391221"
preserveAspectRatio="xMidYMid">
<path
transform="matrix(-0.2,0,0,-0.2,-1.2,0)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 0,0 5,-5 -12.5,0 5,5 Z"
id="path46177-6-2-7" />
</marker>
<marker
style="overflow:visible"
id="Arrow2Send-3"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow2Send"
inkscape:isstock="true"
viewBox="0 0 3.4652294 2.5981128"
markerWidth="3.465229"
markerHeight="2.5981126"
preserveAspectRatio="xMidYMid">
<path
transform="matrix(-0.3,0,0,-0.3,0.69,0)"
d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:0.625;stroke-linejoin:round"
id="path45715-6" />
</marker>
</defs>
<g
inkscape:label="Calque 1"
inkscape:groupmode="layer"
id="layer1">
<ellipse
style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:5.86795;stroke-linecap:square;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;paint-order:fill markers stroke"
id="path986"
cx="62.406502"
cy="64.119804"
rx="4.5660253"
ry="4.5660257" />
<ellipse
style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:5.86795;stroke-linecap:square;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;paint-order:fill markers stroke"
id="path986-6-5"
cx="142.10091"
cy="64.120003"
rx="4.5660253"
ry="4.5660257" />
<text
xml:space="preserve"
style="font-size:4.93889px;line-height:1.25;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;stroke-width:0.264583"
x="63.8083"
y="77.278412"
id="text3135"><tspan
sodipodi:role="line"
style="font-weight:bold;font-size:4.93889px;text-align:center;text-anchor:middle;stroke-width:0.264583"
x="63.8083"
y="77.278412"
id="tspan10911">IMAP</tspan></text>
<text
xml:space="preserve"
style="font-size:4.93889px;line-height:1.25;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;stroke-width:0.264583"
x="135.16226"
y="78.779442"
id="text3135-7-6"><tspan
sodipodi:role="line"
style="font-weight:bold;font-size:4.93889px;stroke-width:0.264583"
x="135.16226"
y="78.779442"
id="tspan10911-3-2">LDAP</tspan></text>
<path
style="fill:none;stroke:#000000;stroke-width:4;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#Arrow2Send)"
d="M 75.106998,57.935664 H 122.14408"
id="path46175-02"
sodipodi:nodetypes="cc" />
<path
style="fill:none;stroke:#000000;stroke-width:4;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#Arrow2Send-3)"
d="M 75.340285,68.861719 H 122.37734"
id="path46175-02-7"
sodipodi:nodetypes="cc" />
<path
inkscape:connector-curvature="0"
style="opacity:0.98;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.175296;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
d="m 94.078762,40.916652 c -0.0389,2.57e-4 -0.0774,5.85e-4 -0.11611,0.0022 -1.77767,0.0113 -3.37259,1.592182 -3.37712,3.374607 -0.0202,0.420172 0.007,0.840654 0,1.260955 -0.28547,-7e-6 -0.57094,-7e-6 -0.85649,0 0.023,1.846787 -0.0461,3.697844 0.036,5.54194 0.17721,1.1875 1.24351,2.26136 2.48695,2.203553 1.36149,-0.0022 2.72716,0.04211 4.086269,-0.0275 1.275754,-0.219817 2.171678,-1.529827 2.074938,-2.797815 0.0144,-1.639617 0,-3.279313 0.007,-4.918966 -0.284237,-0.0072 -0.568484,0.005 -0.852724,-0.0036 0.0216,-0.998381 0.0684,-2.089696 -0.500617,-2.955111 -0.615417,-1.026965 -1.788466,-1.688137 -2.987566,-1.680443 z m 0.0165,1.425752 c 1.01001,0.01389 2.00786,0.850284 1.97878,1.902665 0.0202,0.436339 0.0331,0.872937 0.0425,1.309642 -1.35875,-5.85e-4 -2.71751,0.0022 -4.07619,-0.0022 0.007,-0.683077 -0.17908,-1.429948 0.19471,-2.044983 0.33945,-0.651636 1.01793,-1.150287 1.76284,-1.163575 0.0324,-0.0015 0.0648,-0.0022 0.0974,-0.0015 z"
id="path3355" />
<rect
style="fill:none;fill-rule:evenodd;stroke:#040000;stroke-width:1.27229;stroke-linecap:square;stroke-dasharray:none;stroke-opacity:1;paint-order:fill markers stroke"
id="rect8947"
width="29.594231"
height="6.274775"
x="79.703773"
y="82.478172" />
<text
xml:space="preserve"
style="font-size:11.9191px;line-height:1.25;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;stroke-width:0.15963"
x="80.080597"
y="91.894714"
id="text9263"><tspan
sodipodi:role="line"
id="tspan9261"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:11.9191px;font-family:'Abyssinica SIL';-inkscape-font-specification:'Abyssinica SIL Bold';stroke-width:0.15963"
x="80.080597"
y="91.894714">*****</tspan></text>
<rect
style="fill:none;fill-rule:evenodd;stroke:#040000;stroke-width:1.27229;stroke-linecap:square;stroke-dasharray:none;stroke-opacity:1;paint-order:fill markers stroke"
id="rect8947-5"
width="29.594231"
height="6.274775"
x="79.942833"
y="74.1101" />
<text
xml:space="preserve"
style="font-size:7.05556px;line-height:1.25;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;stroke-width:0.15963"
x="80.848824"
y="79.293304"
id="text9263-6"><tspan
sodipodi:role="line"
id="tspan9261-2"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:7.05556px;font-family:'Abyssinica SIL';-inkscape-font-specification:'Abyssinica SIL Bold';stroke-width:0.15963"
x="80.848824"
y="79.293304">domaine</tspan></text>
<text
xml:space="preserve"
transform="scale(0.26458333)"
id="text9337"
style="font-size:40px;line-height:1.25;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;white-space:pre;shape-inside:url(#rect9339);display:inline" />
</g>
</svg>


View file

@ -0,0 +1,87 @@
# Risotto dataset simple examples
This tutorial aims to show how to create a dataset to deploy a [Caddy](https://caddyserver.com/) server via Risotto.
Note that its only purpose is educational; it is not intended for production use.
See [Rougail documentation for more details about dictionaries, templates and patches](https://cloud.silique.fr/gitea/risotto/rougail/src/branch/main/doc/README.md).
The project can be divided into three application services:
- caddy-common: an application service containing the information common to the two other application services
- caddy-https: a standalone HTTP/HTTPS server
- caddy-https-rp: an HTTPS-only server served behind a reverse proxy
## caddy-common
Start by creating the project tree:
```
seed/caddy-common/
├── dictionaries
├── templates
└── manual
    └── image
        └── preinstall
```
Then describe the application service in [seed/caddy-common/applicationservice.yml](seed/caddy-common/applicationservice.yml).
Also create a dictionary [seed/caddy-common/dictionaries/20-caddy.yml](seed/caddy-common/dictionaries/20-caddy.yml) (a condensed sketch is shown after this list) with:
- the activation of the caddy service in the "multi-user" target; this service needs some templates:
  - the main configuration file [/etc/caddy/Caddyfile](seed/caddy-common/templates/Caddyfile), which includes the other /etc/caddy/Caddyfile.d/\*.caddyfile files
  - /etc/caddy/Caddyfile.d/risotto.caddyfile with the appropriate configuration (this file is not part of this application service)
- a [sysusers](https://www.freedesktop.org/software/systemd/man/sysusers.d.html) file [/sysusers.d/0caddy.conf](seed/caddy-common/templates/sysuser-caddy.conf) to create the system user "caddy"
- a [tmpfiles](https://www.freedesktop.org/software/systemd/man/tmpfiles.d.html) file [/tmpfiles.d/0caddy.conf](seed/caddy-common/templates/tmpfile-caddy.conf) to create the "caddy_root_directory" directory and the volatile directory "/var/lib/caddy"
- a family "caddy" (Caddy web server) with a filename variable "caddy_root_directory" (the root path of the site) whose default value is "/srv/caddy".
Finally, create a script to build the image with the caddy package: [seed/caddy-common/manual/image/preinstall/caddy.sh](seed/caddy-common/manual/image/preinstall/caddy.sh).
## caddy-https
Start by creating the project tree:
```
seed/caddy-https/
├── dictionaries
└── templates
```
Then describe the application service in [seed/caddy-https/applicationservice.yml](seed/caddy-https/applicationservice.yml) with OS and caddy-common dependencies.
Also create a dictionary [seed/caddy-https/dictionaries/25-caddy.yml](seed/caddy-https/dictionaries/25-caddy.yml) to define the variables (a condensed sketch is shown after this list):
- caddy_domain: the domain Caddy should listen on
- caddy_ca_file, caddy_crt_file and caddy_key_file: the certificate files for this domain
- redefine the variable incoming_ports to open ports 80 and 443
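A condensed sketch of these declarations (abridged; indentation is indicative):
```yaml
variables:
  - family:
      - name: network
        variables:
          - variable:
              - name: incoming_ports
                redefine: true
                value:
                  - text: 80
                  - text: 443
      - name: caddy
        variables:
          - variable:
              - name: caddy_domain
                type: domainname
                description: Domain name
```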
And create the new templates:
- [seed/caddy-https/templates/risotto.caddyfile](seed/caddy-https/templates/risotto.caddyfile)
- [seed/caddy-https/templates/ca_HTTP.crt](seed/caddy-https/templates/ca_HTTP.crt)
- [seed/caddy-https/templates/caddy.key](seed/caddy-https/templates/caddy.key)
- [seed/caddy-https/templates/caddy.crt](seed/caddy-https/templates/caddy.crt)
## caddy-https-rp
Start by creating the project tree:
```
seed/caddy-https-rp/
├── dictionaries
├── patches
└── templates
```
Then describe the application service in [seed/caddy-https-rp/applicationservice.yml](seed/caddy-https-rp/applicationservice.yml) with OS, caddy-common and reverse-proxy-client dependencies.
By default, the reverse proxy certificate is only readable by the "root" user. In the dictionary [seed/caddy-https-rp/dictionaries/25-caddy.yml](seed/caddy-https-rp/dictionaries/25-caddy.yml) we change the owner to "caddy".
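A sketch of that redefinition (indentation is indicative):
```yaml
variables:
  - family:
      - name: revprox
        variables:
          - variable:
              - name: revprox_client_cert_owner
                redefine: true
                value:
                  - text: caddy
```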
Then add the Caddy configuration file [seed/caddy-https-rp/templates/risotto.caddyfile](seed/caddy-https-rp/templates/risotto.caddyfile).
This template mainly uses variables defined in the reverse-proxy application service.
Finally, add a patch that modifies the Caddyfile so that Caddy does not listen on port 80: [seed/caddy-https-rp/patches/Caddyfile.patch](seed/caddy-https-rp/patches/Caddyfile.patch).
Patches should only be used when a template file is defined in another dataset; normally you would add a condition in the template instead. But for educational reasons we use a patch in this case.

View file

@ -0,0 +1,38 @@
# Examples
## Caddy as HTTPS server
In [servers.yml](servers.caddy-https.yml) (a condensed sketch is shown after this list):
- we create only the zone "external"
- we create a module "caddy"
- we define a host "host.example.net":
  - servers are containerized with [machined](https://freedesktop.org/wiki/Software/systemd/machined/), so the application service is "host-systemd-machined"
  - the provider application service is "provider-systemd-machined"
- we define a server "caddy"
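A condensed sketch of this servers.yml (abridged; indentation is indicative):
```yaml
zones:
  external:
    network: 192.168.45.0/24
    host_ip: 192.168.45.1
    start_ip: 192.168.45.10
    domain_name: in.example.net
modules:
  caddy:
    - caddy-https
hosts:
  host.example.net:
    applicationservices:
      - host-systemd-machined
    applicationservice_provider: provider-systemd-machined
    servers:
      caddy:
        module: caddy
        informations:
          zones_name:
            - external
        values:
          general.caddy.caddy_domain: caddy.example.net
```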
## Caddy behind a Nginx reverse proxy
In [servers.yml](servers.caddy-https-rp.yml):
- we create the zone "external" and a zone "revprox" between the "revprox" and "caddy" servers
- we create three modules:
  - "revprox": the reverse proxy (with the "letsencrypt" application service if needed)
  - "nsd": to manage local DNS names
  - "caddy"
- we define a host "host.example.net":
  - servers are containerized with [machined](https://freedesktop.org/wiki/Software/systemd/machined/), so the application service is "host-systemd-machined"
  - the provider application service is "provider-systemd-machined"
- we define the servers:
  - revprox in zones "external" and "revprox"
  - nsd in zone "revprox"
  - caddy in zone "revprox"
You must add an index.html file in "/var/lib/risotto/srv/caddy.in.example.net/caddy/".

View file

@ -0,0 +1,2 @@
format: '0.1'
description: Caddy's common files

View file

@ -0,0 +1,25 @@
services:
- service:
- name: caddy
target: multi-user
file:
- text: /etc/caddy/Caddyfile
engine: 'none'
- text: /etc/caddy/Caddyfile.d/risotto.caddyfile
- text: /sysusers.d/0caddy.conf
source: sysuser-caddy.conf
engine: 'none'
- text: /tmpfiles.d/0caddy.conf
source: tmpfile-caddy.conf
engine: 'none'
variables:
- family:
- name: caddy
description: Caddy web server
variables:
- variable:
- name: caddy_root_directory
type: filename
description: The root path of the site
value:
- text: /srv/caddy

View file

@ -0,0 +1 @@
PKG="$PKG caddy"

View file

@ -0,0 +1,43 @@
# The Caddyfile is an easy way to configure your Caddy web server.
#
# https://caddyserver.com/docs/caddyfile
#>GNUNUX
# Global options
{
# remove administration tool
admin off
}
#<GNUNUX
# The configuration below serves a welcome page over HTTP on port 80. To use
# your own domain name with automatic HTTPS, ensure your A/AAAA DNS record is
# pointing to this machine's public IP, then replace `http://` with your domain
# name. Refer to the documentation for full instructions on the address
# specification.
#
# https://caddyserver.com/docs/caddyfile/concepts#addresses
#GNUNUX http:// {
# Set this path to your site's directory.
#GNUNUX root * /usr/share/caddy
# Enable the static file server.
#GNUNUX file_server
# Another common task is to set up a reverse proxy:
# reverse_proxy localhost:8080
# Or serve a PHP site through php-fpm:
# php_fastcgi localhost:9000
# Refer to the directive documentation for more options.
# https://caddyserver.com/docs/caddyfile/directives
#GNUNUX}
# As an alternative to editing the above site block, you can add your own site
# block files in the Caddyfile.d directory, and they will be included as long
# as they use the .caddyfile extension.
import Caddyfile.d/*.caddyfile

View file

@ -0,0 +1,2 @@
g caddy 998 -
u caddy 998:998 "Caddy web server" /var/lib/caddy /sbin/nologin

View file

@ -0,0 +1,2 @@
d /var/lib/caddy 750 caddy caddy - -
d %%caddy_root_directory 750 root caddy - -

View file

@ -0,0 +1,6 @@
format: '0.1'
description: Caddy
depends:
- base-fedora-36
- reverse-proxy-client
- caddy-common

View file

@ -0,0 +1,9 @@
variables:
- family:
- name: revprox
variables:
- variable:
- name: revprox_client_cert_owner
redefine: true
value:
- text: caddy

View file

@ -0,0 +1 @@
PKG="$PKG caddy"

View file

@ -0,0 +1,11 @@
--- a/Caddyfile 2022-12-21 11:51:32.834081202 +0100
+++ b/Caddyfile 2022-12-21 11:51:26.354030537 +0100
@@ -7,6 +7,8 @@
{
# remove administration tool
admin off
+ # do not start caddy on port 80
+ auto_https disable_redirects
}
#<GNUNUX

View file

@ -0,0 +1,20 @@
# listen to all reverse proxy domains
%for %%domain in %%revprox_client_external_domainnames
https://%%domain {
# import reverse proxy certificate
# do not try to check zerossl and let's encrypt file
tls %%revprox_client_cert_file %%revprox_client_key_file {
ca_root %%revprox_client_ca_file
}
# log to the console
log {
output stdout
format console
level info
}
# root directory
root * %%caddy_root_directory
# it's a file server
file_server
}
%end for

View file

@ -0,0 +1,2 @@
g caddy 998 -
u caddy 998:998 "Caddy web server" /var/lib/caddy /sbin/nologin

View file

@ -0,0 +1,2 @@
d /srv/caddy 750 root caddy - -
d /var/lib/caddy 750 caddy caddy - -

View file

@ -0,0 +1,5 @@
format: '0.1'
description: Caddy as standalone HTTPs serveur
depends:
- base-fedora-36
- caddy-common

View file

@ -0,0 +1,72 @@
services:
- service:
- name: caddy
file:
- file_type: variable
text: caddy_ca_file
source: ca_HTTP.crt
- file_type: variable
text: caddy_crt_file
source: caddy.crt
- file_type: variable
text: caddy_key_file
source: caddy.key
variables:
- family:
- name: network
variables:
- variable:
- name: incoming_ports
redefine: true
value:
- text: 80
- text: 443
- name: caddy
variables:
- variable:
- name: caddy_domain
type: domainname
description: Domain name
- name: caddy_ca_file
type: filename
description: Caddy CA filename
hidden: true
- name: caddy_key_file
type: filename
description: Caddy private key filename
hidden: true
- name: caddy_crt_file
type: filename
description: Caddy public key filename
hidden: true
constraints:
- fill:
- name: calc_value
param:
- type: variable
text: tls_ca_directory
- text: ca_HTTP.crt
- name: join
text: /
target:
- text: caddy_ca_file
- fill:
- name: calc_value
param:
- type: variable
text: tls_cert_directory
- text: caddy.crt
- name: join
text: /
target:
- text: caddy_crt_file
- fill:
- name: calc_value
param:
- type: variable
text: tls_key_directory
- text: caddy.key
- name: join
text: /
target:
- text: caddy_key_file

View file

@ -0,0 +1 @@
PKG="$PKG caddy"

View file

@ -0,0 +1,57 @@
# The Caddyfile is an easy way to configure your Caddy web server.
#
# https://caddyserver.com/docs/caddyfile
# The configuration below serves a welcome page over HTTP on port 80. To use
# your own domain name with automatic HTTPS, ensure your A/AAAA DNS record is
# pointing to this machine's public IP, then replace `http://` with your domain
# name. Refer to the documentation for full instructions on the address
# specification.
#
# https://caddyserver.com/docs/caddyfile/concepts#addresses
#>GNUNUX
#http:// {
#listen only in https
{
admin off
}
%for %%domain in %%revprox_client_external_domainnames
https://%%domain {
tls %%revprox_client_cert_file %%revprox_client_key_file {
ca_root %%revprox_client_ca_file
}
log {
output stdout
format console
level info
}
#<GNUNUX
# Set this path to your site's directory.
#>GNUNUX
# root * /usr/share/caddy
root * /srv/caddy
#<GNUNUX
# Enable the static file server.
file_server
# Another common task is to set up a reverse proxy:
# reverse_proxy localhost:8080
# Or serve a PHP site through php-fpm:
# php_fastcgi localhost:9000
# Refer to the directive documentation for more options.
# https://caddyserver.com/docs/caddyfile/directives
}
%end for
# As an alternative to editing the above site block, you can add your own site
# block files in the Caddyfile.d directory, and they will be included as long
# as they use the .caddyfile extension.
#GNUNUX import Caddyfile.d/*.caddyfile

View file

@ -0,0 +1 @@
%%get_chain(cn=%%caddy_domain, authority_cn=%%caddy_domain, authority_name="HTTP", hide=%%hide_secret)

View file

@ -0,0 +1 @@
%%get_certificate(%%caddy_domain, 'HTTP', type="server", hide=%%hide_secret)

View file

@ -0,0 +1 @@
%%get_private_key(cn=%%caddy_domain, authority_name='HTTP', type="server", hide=%%hide_secret)

View file

@ -0,0 +1,18 @@
# listen to all reverse proxy domains
https://%%caddy_domain {
# use certificate
# do not try to check zerossl and let's encrypt file
tls %%caddy_crt_file %%caddy_key_file {
ca_root %%caddy_ca_file
}
# log to the console
log {
output stdout
format console
level info
}
# root directory
root * %%caddy_root_directory
# it's a file server
file_server
}

View file

@ -0,0 +1,2 @@
g caddy 998 -
u caddy 998:998 "Caddy web server" /var/lib/caddy /sbin/nologin

View file

@ -0,0 +1,2 @@
d /srv/caddy 750 root caddy - -
d /var/lib/caddy 750 caddy caddy - -

View file

@ -0,0 +1,48 @@
zones:
external:
network: 192.168.45.0/24
host_ip: 192.168.45.1
start_ip: 192.168.45.10
domain_name: in.example.net
revprox:
network: 192.168.46.0/24
host_ip: 192.168.46.1
start_ip: 192.168.46.10
domain_name: revprox.in.example.net
modules:
revprox:
- nginx-reverse-proxy
- letsencrypt
nsd:
- nsd
caddy:
- caddy-https-rp
hosts:
host.example.net:
applicationservices:
- host-systemd-machined
applicationservice_provider: provider-systemd-machined
values:
general.network.interfaces.interface_names:
- ens3
general.network.output_interface: ens3
servers:
nsd:
module: nsd
informations:
zones_name:
- revprox
revprox:
module: revprox
informations:
zones_name:
- external
- revprox
caddy:
module: caddy
informations:
zones_name:
- revprox
values:
general.revprox.revprox_client.revprox_client_external_domainnames:
- caddy.example.net

View file

@ -0,0 +1,26 @@
zones:
external:
network: 192.168.45.0/24
host_ip: 192.168.45.1
start_ip: 192.168.45.10
domain_name: in.example.net
modules:
caddy:
- caddy-https
hosts:
host.example.net:
applicationservices:
- host-systemd-machined
applicationservice_provider: provider-systemd-machined
values:
general.network.interfaces.interface_names:
- ens3
general.network.output_interface: ens3
servers:
caddy:
module: caddy
informations:
zones_name:
- external
values:
general.caddy.caddy_domain: caddy.example.net

BIN
doc/example_smtp.png Normal file


466
doc/example_smtp.svg Normal file
View file

@ -0,0 +1,466 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
width="178.66008mm"
height="172.7524mm"
viewBox="0 0 178.66008 172.7524"
version="1.1"
id="svg5"
inkscape:version="1.2.1 (9c6d41e410, 2022-07-14)"
sodipodi:docname="example_smtp.svg"
inkscape:export-filename="example_smtp.png"
inkscape:export-xdpi="149.26"
inkscape:export-ydpi="149.26"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg">
<sodipodi:namedview
id="namedview7"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageshadow="2"
inkscape:pageopacity="0.0"
inkscape:pagecheckerboard="0"
inkscape:document-units="mm"
showgrid="false"
inkscape:zoom="0.55055723"
inkscape:cx="173.46062"
inkscape:cy="599.39273"
inkscape:window-width="1920"
inkscape:window-height="1011"
inkscape:window-x="0"
inkscape:window-y="0"
inkscape:window-maximized="1"
inkscape:current-layer="layer1"
inkscape:showpageshadow="2"
inkscape:deskcolor="#d1d1d1" />
<defs
id="defs2">
<marker
style="overflow:visible"
id="marker47493"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow1Sstart"
inkscape:isstock="true">
<path
transform="matrix(0.2,0,0,0.2,1.2,0)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 0,0 5,-5 -12.5,0 5,5 Z"
id="path47491" />
</marker>
<marker
style="overflow:visible"
id="marker46179"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow1Send"
inkscape:isstock="true">
<path
transform="matrix(-0.2,0,0,-0.2,-1.2,0)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 0,0 5,-5 -12.5,0 5,5 Z"
id="path46177" />
</marker>
<marker
style="overflow:visible"
id="Arrow2Send"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow2Send"
inkscape:isstock="true"
viewBox="0 0 3.4652294 2.5981128"
markerWidth="3.465229"
markerHeight="2.5981126"
preserveAspectRatio="xMidYMid">
<path
transform="matrix(-0.3,0,0,-0.3,0.69,0)"
d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:0.625;stroke-linejoin:round"
id="path45715" />
</marker>
<marker
style="overflow:visible"
id="Arrow2Sstart"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow2Sstart"
inkscape:isstock="true"
viewBox="0 0 3.4652294 2.5981128"
markerWidth="3.465229"
markerHeight="2.5981126"
preserveAspectRatio="xMidYMid">
<path
transform="matrix(0.3,0,0,0.3,-0.69,0)"
d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:0.625;stroke-linejoin:round"
id="path45712" />
</marker>
<marker
style="overflow:visible"
id="Arrow1Send"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow1Send"
inkscape:isstock="true">
<path
transform="matrix(-0.2,0,0,-0.2,-1.2,0)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 0,0 5,-5 -12.5,0 5,5 Z"
id="path45697" />
</marker>
<marker
style="overflow:visible"
id="Arrow1Sstart"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow1Sstart"
inkscape:isstock="true">
<path
transform="matrix(0.2,0,0,0.2,1.2,0)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 0,0 5,-5 -12.5,0 5,5 Z"
id="path45694" />
</marker>
<marker
style="overflow:visible"
id="Arrow1Lend"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow1Lend"
inkscape:isstock="true">
<path
transform="matrix(-0.8,0,0,-0.8,-10,0)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 0,0 5,-5 -12.5,0 5,5 Z"
id="path45685" />
</marker>
<marker
style="overflow:visible"
id="marker46179-3"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow1Send"
inkscape:isstock="true"
viewBox="0 0 4.4434635 2.539122"
markerWidth="4.4434633"
markerHeight="2.5391221"
preserveAspectRatio="xMidYMid">
<path
transform="matrix(-0.2,0,0,-0.2,-1.2,0)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 0,0 5,-5 -12.5,0 5,5 Z"
id="path46177-6" />
</marker>
<marker
style="overflow:visible"
id="marker46179-3-6"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow1Send"
inkscape:isstock="true">
<path
transform="matrix(-0.2,0,0,-0.2,-1.2,0)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 0,0 5,-5 -12.5,0 5,5 Z"
id="path46177-6-2" />
</marker>
<marker
style="overflow:visible"
id="marker46179-1"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow1Send"
inkscape:isstock="true"
viewBox="0 0 4.4434635 2.539122"
markerWidth="4.4434633"
markerHeight="2.5391221"
preserveAspectRatio="xMidYMid">
<path
transform="matrix(-0.2,0,0,-0.2,-1.2,0)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 0,0 5,-5 -12.5,0 5,5 Z"
id="path46177-8" />
</marker>
<marker
style="overflow:visible"
id="marker46179-9"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow1Send"
inkscape:isstock="true">
<path
transform="matrix(-0.2,0,0,-0.2,-1.2,0)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 0,0 5,-5 -12.5,0 5,5 Z"
id="path46177-2" />
</marker>
<marker
style="overflow:visible"
id="marker46179-9-3"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow1Send"
inkscape:isstock="true">
<path
transform="matrix(-0.2,0,0,-0.2,-1.2,0)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 0,0 5,-5 -12.5,0 5,5 Z"
id="path46177-2-7" />
</marker>
<marker
style="overflow:visible"
id="marker46179-3-6-9"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow1Send"
inkscape:isstock="true">
<path
transform="matrix(-0.2,0,0,-0.2,-1.2,0)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 0,0 5,-5 -12.5,0 5,5 Z"
id="path46177-6-2-2" />
</marker>
<marker
style="overflow:visible"
id="Arrow2Sstart-8"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow2Sstart"
inkscape:isstock="true">
<path
transform="matrix(0.3,0,0,0.3,-0.69,0)"
d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:0.625;stroke-linejoin:round"
id="path45712-9" />
</marker>
<marker
style="overflow:visible"
id="marker46179-3-6-9-7"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow1Send"
inkscape:isstock="true">
<path
transform="matrix(-0.2,0,0,-0.2,-1.2,0)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 0,0 5,-5 -12.5,0 5,5 Z"
id="path46177-6-2-2-3" />
</marker>
<marker
style="overflow:visible"
id="marker46179-3-6-4"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow1Send"
inkscape:isstock="true"
viewBox="0 0 4.4434635 2.539122"
markerWidth="4.4434633"
markerHeight="2.5391221"
preserveAspectRatio="xMidYMid">
<path
transform="matrix(-0.2,0,0,-0.2,-1.2,0)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 0,0 5,-5 -12.5,0 5,5 Z"
id="path46177-6-2-7" />
</marker>
</defs>
<g
inkscape:label="Calque 1"
inkscape:groupmode="layer"
id="layer1"
transform="translate(-15.292364,-14.109702)">
<rect
style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:#f6f7d7;stroke-width:0.600001;stroke-linecap:round;stroke-linejoin:round;paint-order:fill markers stroke"
id="rect443"
width="178.06007"
height="172.15239"
x="15.592364"
y="14.409702" />
<circle
style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:4;stroke-linecap:square;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;paint-order:fill markers stroke"
id="path846"
cx="96.73632"
cy="103.80212"
r="52.962326" />
<ellipse
style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:5.86795;stroke-linecap:square;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;paint-order:fill markers stroke"
id="path986"
cx="62.406502"
cy="64.119804"
rx="4.5660253"
ry="4.5660257" />
<ellipse
style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:5.86795;stroke-linecap:square;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;paint-order:fill markers stroke"
id="path986-6"
cx="62.407001"
cy="144.45392"
rx="4.5660253"
ry="4.5660257" />
<ellipse
style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:5.86795;stroke-linecap:square;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;paint-order:fill markers stroke"
id="path986-6-5"
cx="98.457001"
cy="79.992493"
rx="4.5660253"
ry="4.5660257" />
<ellipse
style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:5.86795;stroke-linecap:square;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;paint-order:fill markers stroke"
id="path986-6-5-7"
cx="98.45739"
cy="122.40948"
rx="4.5660253"
ry="4.5660257" />
<ellipse
style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:5.86795;stroke-linecap:square;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;paint-order:fill markers stroke"
id="path986-6-5-9"
cx="149.40425"
cy="102.7455"
rx="4.5660253"
ry="4.5660257" />
<text
xml:space="preserve"
style="font-size:4.93889px;line-height:1.25;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;stroke-width:0.264583"
x="37.39616"
y="61.122501"
id="text3135"><tspan
sodipodi:role="line"
id="tspan3133"
style="font-weight:bold;font-size:4.93889px;text-align:center;text-anchor:middle;stroke-width:0.264583"
x="37.39616"
y="61.122501">IMAP (993)</tspan><tspan
sodipodi:role="line"
style="font-weight:bold;font-size:4.93889px;text-align:center;text-anchor:middle;stroke-width:0.264583"
x="37.39616"
y="67.296112"
id="tspan10911">SMTP (587)</tspan></text>
<text
xml:space="preserve"
style="font-size:4.93889px;line-height:1.25;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;stroke-width:0.264583"
x="62.512436"
y="157.8011"
id="text3135-7"><tspan
sodipodi:role="line"
style="font-weight:bold;font-size:4.93889px;text-align:center;text-anchor:middle;stroke-width:0.264583"
x="62.512436"
y="157.8011"
id="tspan10911-3">SMTP</tspan><tspan
sodipodi:role="line"
style="font-weight:bold;font-size:4.93889px;text-align:center;text-anchor:middle;stroke-width:0.264583"
x="62.512436"
y="163.97472"
id="tspan51838">relay</tspan><tspan
sodipodi:role="line"
style="font-weight:bold;font-size:4.93889px;text-align:center;text-anchor:middle;stroke-width:0.264583"
x="62.512436"
y="170.14833"
id="tspan51842">(25)</tspan></text>
<text
xml:space="preserve"
style="font-size:4.93889px;line-height:1.25;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;stroke-width:0.264583"
x="91.762589"
y="70.189774"
id="text3135-7-6"><tspan
sodipodi:role="line"
style="font-weight:bold;font-size:4.93889px;stroke-width:0.264583"
x="91.762589"
y="70.189774"
id="tspan10911-3-2">LDAP</tspan></text>
<text
xml:space="preserve"
style="font-size:4.93889px;line-height:1.25;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;stroke-width:0.264583"
x="162.68114"
y="114.31043"
id="text3135-7-6-1"><tspan
sodipodi:role="line"
style="font-weight:bold;font-size:4.93889px;text-align:center;text-anchor:middle;stroke-width:0.264583"
x="162.68114"
y="114.31043"
id="tspan10911-3-2-2">DNS</tspan><tspan
sodipodi:role="line"
style="font-weight:bold;font-size:4.93889px;text-align:center;text-anchor:middle;stroke-width:0.264583"
x="162.68114"
y="120.48405"
id="tspan21295">Résolver</tspan></text>
<text
xml:space="preserve"
style="font-size:4.93889px;line-height:1.25;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;stroke-width:0.264583"
x="98.739502"
y="135.19682"
id="text3135-7-6-1-0"><tspan
sodipodi:role="line"
style="font-weight:bold;font-size:4.93889px;text-align:center;text-anchor:middle;stroke-width:0.264583"
x="98.739502"
y="135.19682"
id="tspan10911-3-2-2-9">DNS</tspan><tspan
sodipodi:role="line"
style="font-weight:bold;font-size:4.93889px;text-align:center;text-anchor:middle;stroke-width:0.264583"
x="98.739502"
y="141.37044"
id="tspan22983">autoritaire</tspan></text>
<path
style="fill:none;stroke:#000000;stroke-width:4;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#Arrow2Send)"
d="M 98.411,89.863152 V 107.09425"
id="path46175"
sodipodi:nodetypes="cc" />
<path
style="fill:none;stroke:#000000;stroke-width:4;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#Arrow2Send)"
d="m 69.168475,71.05256 15.523919,4.588191"
id="path46175-02"
sodipodi:nodetypes="cc" />
<path
style="fill:none;stroke:#000000;stroke-width:4;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-start:url(#Arrow2Sstart);marker-end:url(#Arrow2Send)"
d="m 58.918881,78.428313 c 0,0 -7.642846,11.083665 -8.1137,23.703427 -0.554549,14.86295 7.598141,26.95783 7.598141,26.95783"
id="path46175-02-5"
sodipodi:nodetypes="csc" />
<path
style="fill:none;stroke:#000000;stroke-width:4;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#Arrow2Send)"
d="m 73.470622,64.314043 c 0,0 23.019562,-13.687982 46.104948,-0.359501 9.42693,5.44269 13.02345,12.067909 16.41683,17.107652 3.97188,5.898933 4.72416,9.274399 4.72416,9.274399"
id="path46175-7"
sodipodi:nodetypes="cssc" />
<path
style="fill:none;stroke:#000000;stroke-width:4;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#Arrow2Send)"
d="m 140.00042,103.64348 -27.82831,13.38506"
id="path46175-0"
sodipodi:nodetypes="cc" />
<path
style="fill:none;stroke:#000000;stroke-width:4;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#Arrow2Send)"
d="m 158.873,103.231 19.41573,1.3e-4"
id="path46175-0-6"
sodipodi:nodetypes="cc" />
<path
style="fill:none;stroke:#000000;stroke-width:4;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#Arrow2Send)"
d="M 38.595,36.174542 51.766,51.796621"
id="path46175-0-6-8"
sodipodi:nodetypes="cc" />
<path
style="fill:none;stroke:#000000;stroke-width:4;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-start:url(#Arrow2Sstart);marker-end:url(#Arrow2Send)"
d="M 51.766,154.77542 38.595,168.15219"
id="path46175-0-6-2-6"
sodipodi:nodetypes="cc" />
<path
style="fill:none;stroke:#000000;stroke-width:4;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#Arrow2Send)"
d="m 73.436962,143.41602 c 0,0 17.397816,13.36128 44.232048,1.48383 16.06906,-7.11254 23.91983,-29.57648 23.91983,-29.57648"
id="path57407"
sodipodi:nodetypes="csc" />
</g>
</svg>


41
doc/infrastructure.md Normal file
View file

@ -0,0 +1,41 @@
# Infrastructure
The infrastructure is defined in a single YAML file: servers.yml.
## Zones
The idea:
- separate the networks according to their uses
- there is no route between them
Ideally, only one zone has Internet access.
Internet access is, in practice, a set of firewall rules.
This network is usually called "external".
The other networks only exist for communication between servers and clients.
The host must have an IP in this network.
IPs inside this network are assigned automatically.
A network is called a "zone".
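For example, a zone declaration in servers.yml can look like this (a sketch based on the examples in this documentation; indentation is indicative):
```yaml
zones:
  external:
    network: 192.168.45.0/24
    host_ip: 192.168.45.1
    start_ip: 192.168.45.10
    domain_name: in.example.net
```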
## Modules
A module is simply a list of application services. A system image is built from the information defined in its application services.
## Hosts
A host is a server on which containers or VMs run.
Defining a host means defining:
- the application services used to configure the host and the VMs
- the application service provider, which defines the provider applied to each VM
- the values used to adapt the configuration
- servers, the list of VMs, each with:
  - the corresponding module
  - information (like the zone)
  - values
For now, the host must run Debian 11 (Bullseye).
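As an illustration, a host definition in servers.yml can look like this (a sketch abridged from the examples in this documentation; indentation is indicative):
```yaml
hosts:
  host.example.net:
    applicationservices:
      - host-systemd-machined
    applicationservice_provider: provider-systemd-machined
    values:
      general.network.output_interface: ens3
    servers:
      caddy:
        module: caddy
        informations:
          zones_name:
            - external
```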

BIN
doc/schema.png Normal file


3750
doc/schema.svg Normal file

File diff suppressed because it is too large

233
funcs.py
View file

@ -1,233 +0,0 @@
from tiramisu import valid_network_netmask, valid_ip_netmask, valid_broadcast, valid_in_network, valid_not_equal as valid_differ, valid_not_equal, calc_value
from ipaddress import ip_address
from os.path import dirname, abspath, join as _join, isdir as _isdir, isfile as _isfile
from typing import List
from json import load
from secrets import token_urlsafe as _token_urlsafe
from rougail.utils import normalize_family
from risotto.utils import multi_function, CONFIGS
from risotto.x509 import gen_cert as _x509_gen_cert, gen_ca as _x509_gen_ca, gen_pub as _x509_gen_pub, has_pub as _x509_has_pub
# =============================================================
# fork of risotto-setting/src/risotto_setting/config/config.py
with open('servers.json', 'r') as server_fh:
ZONES_SERVER = load(server_fh)
ZONES = None
DOMAINS = None
HERE = dirname(abspath(__file__))
def load_zones():
global ZONES
if ZONES is not None:
return
ZONES = ZONES_SERVER['zones']
for server_name, server in ZONES_SERVER['servers'].items():
if 'informations' not in server:
continue
server_zones = server['informations']['zones_name']
server_extra_domainnames = server['informations'].get('extra_domainnames', [])
if len(server_zones) > 1 and len(server_zones) != len(server_extra_domainnames) + 1:
raise Exception(f'the server "{server_name}" has more that one zone, please set correct number of extra_domainnames ({len(server_zones) - 1} instead of {len(server_extra_domainnames)})')
for idx, zone_name in enumerate(server_zones):
zone_domain_name = ZONES[zone_name]['domain_name']
if idx == 0:
zone_server_name = server_name
else:
zone_server_name = server_extra_domainnames[idx - 1]
server_domain_name = zone_server_name.split('.', 1)[1]
if zone_domain_name and zone_domain_name != server_domain_name:
raise Exception(f'wrong server_name "{zone_server_name}" in zone "{zone_name}" should ends with "{zone_domain_name}"')
ZONES[zone_name].setdefault('hosts', []).append(server_name)
def load_domains():
load_zones()
global DOMAINS
if DOMAINS is not None:
return
DOMAINS = {}
for zone_name, zone in ZONES_SERVER['zones'].items():
if 'domain_name' in zone:
hosts = []
ips = []
for host in ZONES[zone_name].get('hosts', []):
hosts.append(host.split('.', 1)[0])
ips.append(get_ip(host, [zone_name], 0))
DOMAINS[zone['domain_name']] = (tuple(hosts), tuple(ips))
def get_ip(server_name: str,
zones_name: List[str],
index: str,
) -> str:
if server_name is None:
return
load_zones()
index = int(index)
zone_name = zones_name[index]
if zone_name not in ZONES:
raise ValueError(f"cannot set IP in unknown zone '{zone_name}'")
zone = ZONES[zone_name]
if server_name not in zone['hosts']:
raise ValueError(f"cannot set IP in unknown server '{server_name}'")
server_index = zone['hosts'].index(server_name)
# print(server_name, zones_name, index, str(ip_address(zone['start_ip']) + server_index))
return str(ip_address(zone['start_ip']) + server_index)
@multi_function
def get_chain(authority_cn,
authority_name,
):
if not authority_name or authority_name is None:
if isinstance(authority_name, list):
return []
return
if not isinstance(authority_cn, list):
is_list = False
authority_cn = [authority_cn]
else:
is_list = True
authorities = []
for auth_cn in authority_cn:
ret = _x509_gen_ca(auth_cn,
authority_name,
HERE,
)
if not is_list:
return ret
authorities.append(ret)
return authorities
@multi_function
def get_certificate(cn,
authority_name,
authority_cn=None,
extra_domainnames=[],
type='server',
):
if isinstance(cn, list) and extra_domainnames:
raise Exception('cn cannot be a list with extra_domainnames set')
if not cn or authority_name is None:
if isinstance(cn, list):
return []
return
return _x509_gen_cert(cn,
extra_domainnames,
authority_cn,
authority_name,
type,
'crt',
HERE,
)
@multi_function
def get_private_key(cn,
authority_name=None,
authority_cn=None,
type='server',
):
if not cn:
if isinstance(cn, list):
return []
return
if authority_name is None:
if _x509_has_pub(cn, HERE):
return _x509_gen_pub(cn,
'key',
HERE,
)
if isinstance(cn, list):
return []
return
return _x509_gen_cert(cn,
[],
authority_cn,
authority_name,
type,
'key',
HERE,
)
def get_public_key(cn):
if not cn:
return
return _x509_gen_pub(cn,
'pub',
HERE,
)
def zone_information(zone_name: str,
type: str,
multi: bool=False,
index: int=None,
) -> str:
if not zone_name:
return
if type == 'gateway' and index != 0:
return
load_zones()
if zone_name not in ZONES:
raise ValueError(f"cannot get zone informations in unknown zone '{zone_name}'")
zone = ZONES[zone_name]
if type not in zone:
raise ValueError(f"unknown type '{type}' in zone '{zone_name}'")
value = zone[type]
if multi:
value = [value]
return value
def get_internal_zones() -> List[str]:
load_domains()
return list(DOMAINS.keys())
@multi_function
def get_zones_info(type: str) -> str:
ret = []
for data in ZONES_SERVER['zones'].values():
ret.append(data[type])
return ret
@multi_function
def get_internal_zone_names() -> List[str]:
load_zones()
return list(ZONES.keys())
def get_internal_zone_information(zone: str,
info: str,
) -> str:
load_domains()
if info == 'cidr':
return ZONES[zone]['gateway'] + '/' + ZONES[zone]['network'].split('/')[-1]
return ZONES[zone][info]
def get_internal_info_in_zone(zone: str,
auto: bool,
type: str,
index: int=None,
) -> List[str]:
if not auto:
return
for domain_name, domain in DOMAINS.items():
if zone == domain_name:
if type == 'host':
return list(domain[0])
else:
return domain[1][index]
# =============================================================

BIN
logo.png Normal file


115
logo.svg Normal file
View file

@ -0,0 +1,115 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
width="37.297001mm"
height="38.629002mm"
viewBox="0 0 37.297 38.629002"
version="1.1"
id="svg5"
inkscape:version="1.2.1 (9c6d41e410, 2022-07-14)"
sodipodi:docname="logo.svg"
inkscape:export-filename="logo.png"
inkscape:export-xdpi="149.26"
inkscape:export-ydpi="149.26"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg">
<sodipodi:namedview
id="namedview7"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageshadow="2"
inkscape:pageopacity="0.0"
inkscape:pagecheckerboard="0"
inkscape:document-units="mm"
showgrid="false"
inkscape:zoom="4.404458"
inkscape:cx="63.685475"
inkscape:cy="75.378174"
inkscape:window-width="1920"
inkscape:window-height="1011"
inkscape:window-x="0"
inkscape:window-y="0"
inkscape:window-maximized="1"
inkscape:current-layer="layer1"
inkscape:showpageshadow="2"
inkscape:deskcolor="#d1d1d1" />
<defs
id="defs2" />
<g
inkscape:label="Calque 1"
inkscape:groupmode="layer"
id="layer1"
transform="translate(-75.0784,-36.897831)">
<rect
style="fill:#f6f7d7;fill-opacity:1;fill-rule:evenodd;stroke:none;stroke-width:2.04884;stroke-linecap:square;paint-order:fill markers stroke"
id="rect8118"
width="37.297001"
height="38.629002"
x="75.0784"
y="36.897831" />
<rect
style="fill:none;fill-opacity:1;fill-rule:evenodd;stroke:none;stroke-width:1.5;stroke-linecap:square;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;paint-order:fill markers stroke"
id="rect19192"
width="37.29702"
height="38.628922"
x="75.0784"
y="36.897831" />
<rect
style="fill:#008700;fill-opacity:1;fill-rule:evenodd;stroke:#008700;stroke-width:1.84143;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;paint-order:fill markers stroke"
id="rect18918"
width="29.788723"
height="6.963315"
x="78.625404"
y="40.178349" />
<rect
style="fill:#008700;fill-opacity:1;fill-rule:evenodd;stroke:#008700;stroke-width:1.84143;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;paint-order:fill markers stroke"
id="rect18918-5"
width="29.788723"
height="6.963315"
x="78.625114"
y="65.0494" />
<text
xml:space="preserve"
style="font-size:10.5833px;line-height:1.15;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#4d4d4d;stroke-width:0.265;stroke-miterlimit:4;stroke-dasharray:none"
x="93.5"
y="47.586319"
id="text2080"><tspan
sodipodi:role="line"
id="tspan2078"
style="font-weight:bold;text-align:center;text-anchor:middle;fill:#f6f7d7;fill-opacity:1;stroke-width:0.265;stroke-miterlimit:4;stroke-dasharray:none"
x="93.5"
y="47.586319">RIS</tspan><tspan
sodipodi:role="line"
style="font-weight:bold;text-align:center;text-anchor:middle;fill:#4d4d4d;stroke-width:0.265;stroke-miterlimit:4;stroke-dasharray:none"
x="93.5"
y="59.757114"
id="tspan2082">OTTO</tspan><tspan
sodipodi:role="line"
style="font-weight:bold;text-align:center;text-anchor:middle;fill:#f6f7d7;fill-opacity:1;stroke-width:0.265;stroke-miterlimit:4;stroke-dasharray:none"
x="93.5"
y="71.92791"
id="tspan7995" /></text>
<circle
style="fill:#008700;fill-opacity:1;fill-rule:evenodd;stroke:#f6f7d7;stroke-width:0.56;stroke-linecap:square;stroke-dasharray:none;stroke-opacity:1;paint-order:fill markers stroke"
id="path19218"
cx="103.00674"
cy="68.43734"
r="1.7277808" />
<path
style="fill:#f6f7d7;fill-opacity:1;stroke:#f6f7d7;stroke-width:0.56;stroke-linecap:round;stroke-linejoin:round;stroke-dasharray:none;stroke-opacity:1"
d="M 82.1984,66.707831 H 95.287674"
id="path19357" />
<path
style="fill:#f6f7d7;fill-opacity:1;stroke:#f6f7d7;stroke-width:0.6;stroke-linecap:round;stroke-linejoin:round;stroke-dasharray:none;stroke-opacity:1"
d="M 82.1984,70.167831 H 95.287664"
id="path19357-6" />
<path
style="fill:#f6f7d7;fill-opacity:1;stroke:#f6f7d7;stroke-width:0.56;stroke-linecap:round;stroke-linejoin:round;stroke-dasharray:none;stroke-opacity:1"
d="M 82.1984,68.45114 H 95.287664"
id="path19357-5" />
</g>
</svg>

View file

@@ -1,3 +1,4 @@
[directories]
dataset = '/home/gnunux/git/risotto/dataset/seed'
dest = 'installations'
dest_templates = 'templates'

368
sbin/risotto_auto_doc Executable file

@@ -0,0 +1,368 @@
#!/usr/bin/env python3
from os import listdir
from os.path import isdir, join
from tabulate import tabulate
from sys import argv
from rougail import RougailConfig
from rougail.convert import RougailConvert
from rougail.objspace import RootRougailObject
from risotto.utils import EXTRA_ANNOTATORS, ROUGAIL_NAMESPACE, ROUGAIL_NAMESPACE_DESCRIPTION
from risotto.image import load_application_service
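# generates a README.md for every application service (description, dependencies, variables, providers/suppliers) and a global seed/README.md index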
rougailconfig = RougailConfig
rougailconfig['variable_namespace'] = ROUGAIL_NAMESPACE
rougailconfig['variable_namespace_description'] = ROUGAIL_NAMESPACE_DESCRIPTION
DEFAULT_TYPE = 'string'
ROUGAIL_VARIABLE_TYPE = 'https://forge.cloud.silique.fr/risotto/rougail/src/branch/main/doc/variable/README.md#le-type-de-la-variable'
def add_title_family(elts, dico):
for idx, elt in enumerate(elts):
description = elt.doc
if not idx:
description = description.capitalize()
space = idx + 3
title = '#' * space + f' {description} (*{elt.path}*)'
if title not in dico:
dico[title] = {'variables': [], 'help': '', 'type': ''}
if hasattr(elt, 'information') and hasattr(elt.information, 'help'):
dico[title]['help'] = elt.information.help
if hasattr(elt, 'suffixes') and elt.suffixes:
dico[title]['type'] = 'dynamic'
dico[title]['suffixes'] = elt.suffixes.path
if hasattr(elt, 'leadership') and elt.leadership:
dico[title]['type'] = 'leadership'
return title
def parse(applicationservice, elts, dico, providers_suppliers, hidden, objectspace):
elt = elts[-1]
first_variable = True
if not hidden:
hidden = hasattr(elt, 'properties') and ('hidden' in elt.properties or 'disabled' in elt.properties)
is_leadership = hasattr(elt, 'leadership') and elt.leadership
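# walk every attribute of the element: dict and list attributes hold the child variables and families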
for children in vars(elt).values():
if isinstance(children, dict):
children = list(children.values())
if not isinstance(children, list):
continue
for idx, child in enumerate(children):
if isinstance(child, objectspace.property_) or \
not isinstance(child, RootRougailObject):
continue
if isinstance(child, objectspace.variable):
if not hidden and (not hasattr(child, 'properties') or ('hidden' not in child.properties and not 'disabled' in child.properties)):
if first_variable:
title = add_title_family(elts, dico)
first_variable = False
var_title = child.doc
if hasattr(child, 'properties') and 'mandatory' in child.properties:
var_title = '**' + var_title + '**'
var_path = child.xmlfiles[-1].split('/', 2)[-1]
if child.doc != child.name:
var_title += f' (*[{child.name}]({var_path})*)'
else:
var_title = f'*[{var_title}]({var_path})*'
if ((idx == 0 or not is_leadership) and child.multi is True) or (idx != 0 and is_leadership and child.multi == 'submulti'):
var_title += ' [+]'
values = {'description': var_title,
}
if hasattr(child, 'information') and hasattr(child.information, 'help'):
values['help'] = child.information.help
if child.type != DEFAULT_TYPE:
values['type'] = child.type
if hasattr(child, 'default'):
default = child.default
if isinstance(default, objectspace.value):
default = '<calculated>'
if isinstance(default, list):
default = '<br />'.join(default)
values['values'] = default
if hasattr(child, 'choice'):
values['choices'] = '<br />'.join([choice.name for choice in child.choice])
if hasattr(child, 'provider'):
provider = child.provider
values['provider'] = provider
if ':' not in provider:
providers_suppliers['providers'].setdefault(provider, []).append(applicationservice)
if hasattr(child, 'supplier'):
supplier = child.supplier
values['supplier'] = supplier
if ':' not in supplier:
providers_suppliers['suppliers'].setdefault(supplier, []).append(applicationservice)
dico[title]['variables'].append(values)
else:
if hasattr(child, 'provider'):
provider = child.provider
if ':' not in provider:
providers_suppliers['providers'].setdefault(provider, []).append(applicationservice)
if hasattr(child, 'supplier'):
supplier = child.supplier
if ':' not in supplier:
providers_suppliers['suppliers'].setdefault(supplier, []).append(applicationservice)
else:
parse(applicationservice, elts + [child], dico, providers_suppliers, hidden, objectspace)
def build_dependencies_tree(applicationservice, applicationservice_data, applicationservices_data, applicationservices_data_ext, space):
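# recursively renders the dependency tree as indented markdown bullet links, flagging dependencies that live in an external dataset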
depends = []
if applicationservice_data['depends']:
if applicationservice in applicationservices_data:
app_data = applicationservices_data[applicationservice]
else:
for url, apps_data in applicationservices_data_ext.items():
if applicationservice in apps_data:
app_data = apps_data[applicationservice]
break
else:
raise Exception(f'cannot find applicationservice "{applicationservice}"')
for idx, depend in enumerate(app_data['depends']):
if depend in applicationservices_data:
url = '..'
ext = False
else:
for url, apps_data in applicationservices_data_ext.items():
if depend in apps_data:
break
else:
raise Exception(f'cannot find applicationservice "{depend}"')
ext = True
subdepends = build_dependencies_tree(depend, applicationservice_data, applicationservices_data, applicationservices_data_ext, space + 2)
if not idx or subdepends:
title = '\n'
else:
title = ''
depend_desc = depend
if ext:
depend_desc += ' (in external dataset)'
title += ' ' * space + f'- [{depend_desc}]({url}/{depend}/README.md)'
depends.append(title)
depends.extend(subdepends)
return depends
def load_data(url, directory, applicationservices_data, global_data={}):
root_path = join(directory, 'seed')
applicationservices = listdir(root_path)
tmps = {}
for applicationservice in applicationservices:
as_dir = join(root_path, applicationservice)
if not isdir(as_dir):
continue
applicationservice_data = load_application_service(as_dir)
if not applicationservice_data.get('documentation', True):
continue
applicationservices_data[applicationservice] = {'description': applicationservice_data['description'],
'website': applicationservice_data.get('website'),
'as_dir': as_dir,
'depends': [],
'used_by': [],
}
if applicationservice in tmps:
for app in tmps.pop(applicationservice):
used_by = f'[{app}](../{app}/README.md)'
applicationservices_data[applicationservice]['used_by'].append(used_by)
if 'depends' in applicationservice_data:
for depend in applicationservice_data['depends']:
applicationservices_data[applicationservice]['depends'].append(depend)
if depend in applicationservices_data:
used_by = f'[{applicationservice}](../{applicationservice}/README.md)'
applicationservices_data[depend]['used_by'].append(used_by)
else:
tmps.setdefault(depend, []).append(applicationservice)
if tmps and global_data:
for depend, applications in tmps.items():
for app in applications:
used_by = f'[{app} (in external dataset)]({url}/{app}/README.md)'
global_data[depend]['used_by'].append(used_by)
def write_data(applicationservices_data, applicationservices_data_ext):
dico = {}
providers_suppliers = {'providers': {}, 'suppliers': {}}
for applicationservice, applicationservice_data in applicationservices_data.items():
as_dir = applicationservice_data['as_dir']
dirname = join(as_dir, 'dictionaries')
if isdir(dirname):
rougailconfig['dictionaries_dir'] = [dirname]
else:
rougailconfig['dictionaries_dir'] = []
dirname_extras = join(as_dir, 'extras')
extra_dictionaries = {}
if isdir(dirname_extras):
for extra in listdir(dirname_extras):
extra_dir = join(dirname_extras, extra)
if isdir(extra_dir):
extra_dictionaries.setdefault(extra, []).append(extra_dir)
if not isdir(dirname) and not extra_dictionaries:
continue
rougailconfig['extra_dictionaries'] = extra_dictionaries
converted = RougailConvert(rougailconfig, just_doc=True)
converted.load_dictionaries()
converted.annotate()
objectspace = converted.rougailobjspace
if hasattr(objectspace.space, 'variables'):
dico[applicationservice] = {}
for name, elt in objectspace.space.variables.items():
parse(applicationservice, [elt], dico[applicationservice], providers_suppliers, False, objectspace)
for applicationservice, applicationservice_data in applicationservices_data.items():
as_dir = applicationservice_data['as_dir']
with open(join(as_dir, 'README.md'), 'w') as as_fh:
as_fh.write(f'---\ngitea: none\ninclude_toc: true\n---\n\n')
as_fh.write(f'# {applicationservice}\n\n')
as_fh.write(f'## Description\n\n')
description = applicationservice_data['description'] + '.\n'
if applicationservice_data['website']:
description += f'\n[For more information]({applicationservice_data["website"]})\n'
as_fh.write(description)
if applicationservice_data['depends']:
as_fh.write(f'\n## Dependencies\n\n')
for depend in build_dependencies_tree(applicationservice, applicationservice_data, applicationservices_data, applicationservices_data_ext, 0):
as_fh.write(f'{depend}\n')
if applicationservice in dico and dico[applicationservice]:
as_fh.write('\n## Variables\n\n')
for title, data in dico[applicationservice].items():
as_fh.write(f'{title}\n')
if data['type'] == 'leadership':
as_fh.write('\nThis family is a leadership family.\n')
if data['type'] == 'dynamic':
as_fh.write(f'\nThis is a dynamic family generated from the variable "{data["suffixes"]}".\n')
if data['help']:
as_fh.write(f'\n{data["help"]}\n')
keys = []
if data['variables']:
variables = data['variables']
for variable in variables:
for key in variable:
if key not in keys:
keys.append(key)
values = []
for variable in variables:
value = []
for key in keys:
if key in variable:
val = variable[key]
elif key == 'type':
val = DEFAULT_TYPE
else:
val = ''
if key == 'type':
val = f'[{val}]({ROUGAIL_VARIABLE_TYPE})'
value.append(val)
values.append(value)
as_fh.write('\n')
as_fh.write(tabulate(values, headers=[key.capitalize() for key in keys], tablefmt="github"))
as_fh.write('\n')
as_fh.write('\n')
# FIXME if not applicationservice_data['used_by']:
# FIXME as_fh.write('\n## Variables with dependencies\n\n')
as_fh.write('\n- [+]: variable is multiple\n- **bold**: variable is mandatory\n')
if applicationservice_data['used_by']:
as_fh.write('\n## Used by\n\n')
if len(applicationservice_data['used_by']) == 1:
link = applicationservice_data['used_by'][0]
as_fh.write(f'{link}\n')
else:
for link in applicationservice_data['used_by']:
as_fh.write(f'- {link}\n')
linked = []
for provider, provider_as in providers_suppliers['providers'].items():
if not applicationservice in provider_as:
continue
for supplier in providers_suppliers['suppliers'][provider]:
if supplier in linked:
continue
linked.append(supplier)
linked.sort()
if linked:
if len(linked) == 1:
as_fh.write('\n## Supplier\n\n')
as_fh.write(f'[{linked[0]}](../{linked[0]}/README.md)\n')
else:
as_fh.write('\n## Suppliers\n\n')
for supplier in linked:
as_fh.write(f'- [{supplier}](../{supplier}/README.md)\n')
linked = []
for supplier, supplier_as in providers_suppliers['suppliers'].items():
if not applicationservice in supplier_as:
continue
for provider in providers_suppliers['providers'][supplier]:
if provider in linked:
continue
linked.append(provider)
linked.sort()
if linked:
if len(linked) == 1:
as_fh.write('\n## Provider\n\n')
as_fh.write(f'[{linked[0]}](../{linked[0]}/README.md)\n')
else:
as_fh.write('\n## Providers\n\n')
for provider in linked:
as_fh.write(f'- [{provider}](../{provider}/README.md)\n')
as_fh.write(f'\n[All application services for this dataset.](../README.md)\n')
with open('seed/README.md', 'w') as as_fh:
as_fh.write('# Application services\n\n')
applicationservices = {}
for applicationservice in applicationservices_data:
applicationservices.setdefault(applicationservice.split('-')[0], []).append(applicationservice)
applicationservice_categories = list(applicationservices.keys())
applicationservice_categories.sort()
for category in applicationservice_categories:
applicationservices_ = applicationservices[category]
if len(applicationservices_) == 1:
applicationservice = applicationservices_[0]
applicationservice_data = applicationservices_data[applicationservice]
as_fh.write(f'- [{applicationservice}]({applicationservice}/README.md): {applicationservice_data["description"]}\n')
else:
as_fh.write(f'- {category}:\n')
applicationservices_.sort()
for applicationservice in applicationservices_:
applicationservice_data = applicationservices_data[applicationservice]
as_fh.write(f' - [{applicationservice}]({applicationservice}/README.md): {applicationservice_data["description"]}\n')
providers = list(providers_suppliers['providers'].keys())
providers.sort()
if providers:
as_fh.write('\n# Providers and suppliers\n\n')
for provider in providers:
as_fh.write(f'- {provider}:\n')
if providers_suppliers['providers'][provider]:
if len(providers_suppliers['providers'][provider]) == 1:
applicationservice = providers_suppliers['providers'][provider][0]
as_fh.write(f' - Provider: [{applicationservice}]({applicationservice}/README.md)\n')
else:
as_fh.write(f' - Providers:\n')
for applicationservice in providers_suppliers['providers'][provider]:
as_fh.write(f' - [{applicationservice}]({applicationservice}/README.md)\n')
if providers_suppliers['suppliers']:
if len(providers_suppliers['suppliers'][provider]) == 1:
applicationservice = providers_suppliers['suppliers'][provider][0]
as_fh.write(f' - Supplier: [{applicationservice}]({applicationservice}/README.md)\n')
else:
as_fh.write(f' - Suppliers:\n')
for applicationservice in providers_suppliers['suppliers'][provider]:
as_fh.write(f' - [{applicationservice}]({applicationservice}/README.md)\n')
def main():
applicationservices_data = {}
load_data('..', '', applicationservices_data)
applicationservices_data_ext = {}
for arg in argv[1:]:
if '=' not in arg:
raise Exception(f'cannot parse argument "{arg}", should be dataset_path=url')
path, url = arg.split('=', 1)
if url in applicationservices_data_ext:
raise Exception(f'duplicate url "{url}" in arguments')
applicationservices_data_ext[url] = {}
load_data(url, path, applicationservices_data_ext[url], applicationservices_data)
write_data(applicationservices_data, applicationservices_data_ext)
if __name__ == '__main__':
main()

28
sbin/risotto_check_certificates Executable file

@@ -0,0 +1,28 @@
#!/usr/bin/env python3
from os import walk
from datetime import datetime
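# assumption: certificates are renewed weekly and named certificate_<ISO week number>.crt, so any other certificate_*.crt is reported as outdated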
week_number = datetime.now().isocalendar().week
week_cert = f'certificate_{week_number}.crt'
for p, d, f in walk('pki/x509'):
if not d and not f:
print('empty dir, you can remove it: ', p)
if not f:
continue
if f == ['serial_number']:
continue
if not p.endswith('/ca') and not p.endswith('/server') and not p.endswith('/client'):
print('unknown directory: ', p)
continue
if week_cert in f:
continue
for ff in f:
if ff.startswith('certificate_') and ff.endswith('.crt'):
print('old certificate in: ', p)
break
else:
print('cannot find certificate in: ', p)

235
sbin/risotto_display Executable file

@@ -0,0 +1,235 @@
#!/usr/bin/env python3
from tabulate import tabulate
from argparse import ArgumentParser
from rougail.utils import normalize_family
from tiramisu.error import PropertiesOptionError
from risotto.machine import load, remove_cache, ROUGAIL_NAMESPACE
HIDE_SECRET = True
def list_to_string(lst):
if isinstance(lst, list):
return "\n".join([str(val) for val in lst])
return lst
def get_files_subelements(type_name, element, files_subelement, files_cols):
data = {}
if not element.option('activate').value.get():
return data
for subelement in files_subelement.values():
if subelement['type'] == 'subelement':
try:
value = list_to_string(element.option(subelement['key']).value.get())
# FIXME except AttributeError:
except Exception:
value = ''
elif subelement['type'] == 'information':
value = element.information.get(subelement['key'], '')
elif subelement['type'] == 'none':
value = subelement['value']
else:
raise Exception('unknown subelement')
if value != '':
files_cols.add(subelement['key'])
data[subelement['key']] = value
if type_name == 'overrides':
data['name'] = f'/systemd/system/{data["source"]}.d/rougail.conf'
if not data['engine']:
data['engine'] = 'none'
elif not data['engine']:
data['engine'] = 'cheetah'
return data
def services(config, values):
files_subelement = {'Source': {'key': 'source', 'type': 'information'},
'Nom': {'key': 'name', 'type': 'subelement'},
'Variable': {'key': 'variable', 'type': 'subelement'},
'Propriétaire': {'key': 'owner', 'type': 'subelement'},
'Groupe': {'key': 'group', 'type': 'subelement'},
'Mode': {'key': 'mode', 'type': 'subelement'},
'Moteur': {'key': 'engine', 'type': 'information'},
}
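# collect, for each systemd service, the rows describing its files/overrides and keep disabled services in a separate table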
disabled_services = []
for service in config.option.list(type="all"):
doc = service.option.doc()
files_lst = []
files_cols = set()
if not service.option('manage').value.get():
doc += " - unmanaged"
if not service.option('activate').value.get():
disabled_services.append([doc])
else:
for type in service.list(type="all"):
type_name = type.option.doc()
if type_name in ['files', 'overrides']:
for element in type.list(type="all"):
data = get_files_subelements(type_name, element, files_subelement, files_cols)
if data:
files_lst.append(data)
elif type_name == 'manage':
pass
elif type_name == 'activate':
if not type.value.get():
doc += " - unactivated"
else:
print("FIXME " + type_name)
if files_lst:
keys = [key for key, val in files_subelement.items() if val['key'] in files_cols]
values[doc] = {'keys': keys, 'lst': []}
for lst in files_lst:
values[doc]['lst'].append([val for key, val in lst.items() if key in files_cols])
if disabled_services:
values["Services désactivés"] = {'keys': ['Nom'], 'lst': disabled_services}
def table_leader(config, read_only):
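# build one table for a leader/followers family: one row per option, plus a value/owner column pair for every leader index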
keys = ['Description']
if read_only:
keys.append('Cachée')
leadership_lst = config.list(type="all")
leader = leadership_lst.pop(0)
leader_owner = leader.owner.get()
follower_names = [follower.option.name() for follower in leadership_lst]
doc = leader.option.doc()
properties = leader.property.get()
if 'mandatory' in properties:
doc += '*'
name = leader.option.name()
lst = [[f'{doc} ({name})']]
if read_only:
if 'hidden' in properties:
hidden = 'oui'
else:
hidden = ''
lst[0].append(hidden)
for idx, leader_value in enumerate(leader.value.get()):
keys.append(f'Valeur {idx}')
keys.append(f'Utilisateur {idx}')
lst[0].append(leader_value)
lst[0].append(leader_owner)
for follower_idx, follower_name in enumerate(follower_names):
follower_option = config.option(follower_name, idx)
if idx == 0:
doc = follower_option.option.doc()
properties = follower_option.property.get()
if 'mandatory' in properties:
doc += '*'
name = follower_option.option.name()
lst.append([f'{doc} ({name})'])
if read_only:
if 'hidden' in properties:
hidden = 'oui'
else:
hidden = ''
lst[-1].append(hidden)
try:
lst[follower_idx + 1].append(list_to_string(follower_option.value.get()))
lst[follower_idx + 1].append(follower_option.owner.get())
except PropertiesOptionError:
pass
# leader = next leader_iter
# if master_values is None:
# master_values = subconfig.value.get()
return {'keys': keys, 'lst': lst}
def table(config, prefix_len, values, read_only):
lst = []
for subconfig in config.option.list(type="all"):
# prefix = prefix_len * 2 * ' '
# if subconfig.option.isoptiondescription():
# prefix += '=>'
# else:
# prefix += '-'
# display_str = f'{prefix} {description}'
# if name != description:
# display_str = f'{display_str} ({name})'
name = subconfig.option.name()
doc = subconfig.option.doc()
if prefix_len == 0 and ROUGAIL_NAMESPACE != name:
doc = doc.capitalize()
if prefix_len == 0 and name == 'services':
values['Services'] = {}
services(subconfig, values['Services'])
elif subconfig.option.isoptiondescription():
od_name = f'{doc} ({(subconfig.option.path()).split(".", 1)[1]})'
values[od_name] = None
if subconfig.option.isleadership():
values[od_name] = table_leader(subconfig, read_only)
else:
values[od_name] = table(subconfig, prefix_len + 1, values, read_only)
else:
value = list_to_string(subconfig.value.get())
doc = subconfig.option.doc()
properties = subconfig.property.get()
if 'mandatory' in properties:
doc += '*'
name = subconfig.option.name()
lst.append([f'{doc} ({name})', value])
if read_only:
if 'hidden' in properties:
hidden = 'oui'
else:
hidden = ''
lst[-1].append(hidden)
lst[-1].append(subconfig.owner.get())
keys = ['Description', 'Valeur']
if read_only:
keys.append('Cachée')
keys.append('Utilisateur')
return {'keys': keys, 'lst': lst}
def main():
parser = ArgumentParser()
parser.add_argument('server_name')
parser.add_argument('--read_only', action='store_true')
parser.add_argument('--nocache', action='store_true')
parser.add_argument('--debug', action='store_true')
args = parser.parse_args()
if args.nocache:
remove_cache()
values = {}
server_name = args.server_name
config = load(hide_secret=HIDE_SECRET,
original_display_name=True,
valid_mandatories=args.read_only,
)
if not args.read_only:
config.property.read_write()
root_option = config.option(normalize_family(server_name))
try:
root_option.option.get()
except AttributeError:
exit(f'Unable to find {server_name} configuration: {[o.option.description() for o in config.option.list(type="optiondescription")]}')
table(root_option, 0, values, args.read_only)
for title, dico in values.items():
if title == 'Services':
if not dico:
continue
print()
print(title)
print('=' * len(title))
print()
for subtitle, dic in dico.items():
print()
print(' ' + subtitle)
print(' ' + '-' * len(subtitle))
print()
print(tabulate(dic['lst'], headers=dic['keys'], tablefmt="fancy_grid"))
elif dico['lst']:
print()
print(title)
print('=' * len(title))
print()
print(tabulate(dico['lst'], headers=dico['keys'], tablefmt="fancy_grid"))
main()

34
sbin/risotto_templates Executable file

@@ -0,0 +1,34 @@
#!/usr/bin/env python3
from argparse import ArgumentParser
from traceback import print_exc
from risotto.machine import remove_cache, build_files, INSTALL_DIR
def main():
parser = ArgumentParser()
parser.add_argument('server_name', nargs='?')
parser.add_argument('--nocache', action='store_true')
parser.add_argument('--debug', action='store_true')
parser.add_argument('--copy_tests', action='store_true')
parser.add_argument('--template')
args = parser.parse_args()
if args.nocache:
remove_cache()
try:
build_files(None,
args.server_name,
False,
args.copy_tests,
template=args.template,
)
except Exception as err:
if args.debug:
print_exc()
exit(err)
print(f'templates generated in "{INSTALL_DIR}" directory')
main()

273
src/risotto/image.py Normal file

@@ -0,0 +1,273 @@
from shutil import copy2, copytree
from os import listdir, makedirs
from os.path import join, isdir, isfile, dirname
from yaml import load as yaml_load, SafeLoader
from tiramisu.error import PropertiesOptionError
from .rougail import func
#
from .utils import RISOTTO_CONFIG
class ModuleCfg():
def __init__(self, module_name):
self.module_name = module_name
self.dictionaries_dir = []
self.functions_file = [func.__file__]
self.templates_dir = []
self.patches_dir = []
self.extra_dictionaries = {}
self.servers = []
self.depends = []
self.manuals = []
self.tests = []
self.providers = {}
#self.suppliers = []
def __repr__(self):
return str(vars(self))
def load_application_service(as_dir: str) -> str:
with open(join(as_dir, 'applicationservice.yml')) as yaml:
return yaml_load(yaml, Loader=SafeLoader)
class Applications:
def __init__(self) -> None:
self.datasets = RISOTTO_CONFIG.get('directories', {}).get('datasets', ['dataset'])
self.application_directories = self._load_application_directories()
def _load_application_directories(self) -> dict:
"""List all service applications in datasets
Returns something link:
{<applicationservice>: seed/<applicationservice>}
"""
applications = {'host': None}
for dataset_directory in self.datasets:
for applicationservice in listdir(dataset_directory):
applicationservice_dir = join(dataset_directory, applicationservice)
if not isdir(applicationservice_dir) or \
not isfile(join(applicationservice_dir, 'applicationservice.yml')):
continue
if applicationservice in applications:
raise Exception(f'multi applicationservice: {applicationservice} ({applicationservice_dir} <=> {applications[applicationservice]})')
applications[applicationservice] = applicationservice_dir
return applications
class Modules:
"""Modules are defined by the end user
A module is a list of application services
The class collects all the useful information for the module
"""
def __init__(self,
applicationservices: Applications,
applicationservice_provider: str,
modules_name: list,
host_applicationsservice: str,
) -> None:
self.application_directories = applicationservices.application_directories
self.module_infos = {}
self.module_infos['host'] = self._load_module_informations('host',
['host', host_applicationsservice],
is_host=True,
)
for module_name in modules_name:
if module_name == 'host':
raise Exception('forbidden module name: "host"')
self.module_infos[module_name] = self._load_module_informations(module_name,
[applicationservice_provider, module_name],
is_host=False,
)
def get(self,
module_name: str,
) -> ModuleCfg:
return self.module_infos[module_name]
def _load_module_informations(self,
module_name: str,
applicationservices: list,
is_host: bool,
) -> ModuleCfg:
"""Create a ModuleCfg object and collect informations
A module must depend to an unique distribution
"""
cfg = ModuleCfg(module_name)
distribution = None
for applicationservice in applicationservices:
ret = self._load_applicationservice(applicationservice,
cfg,
)
if ret:
if distribution:
raise Exception(f'duplicate distribution for {cfg.module_name}: {distribution} and {ret} (dependencies: {cfg.depends}) ')
distribution = ret
if not is_host and not distribution:
raise Exception(f'cannot find any Linux distribution for {module_name}')
return cfg
def _load_applicationservice(self,
appname: str,
cfg: ModuleCfg,
) -> str:
"""extract informations from an application service and load it's dependency
informations collected is store to the module
returns the name of current distribution, if found
"""
if appname not in self.application_directories:
raise Exception(f'cannot find application dependency "{appname}"')
cfg.depends.append(appname)
as_dir = self.application_directories[appname]
if not as_dir:
return
self._load_applicationservice_directories(as_dir,
cfg,
)
app = load_application_service(as_dir)
provider = app.get('provider')
if provider:
cfg.providers.setdefault(provider, [])
if appname not in cfg.providers[provider]:
cfg.providers[provider].append(appname)
supplier = app.get('supplier')
#if supplier:
# self.suppliers.setdefault(supplier, [])
# if appname not in self.suppliers[supplier]:
# self.suppliers[supplier].append(appname)
if 'distribution' in app and app['distribution']:
distribution = appname
else:
distribution = None
for depend in app.get('depends', []):
if depend in cfg.depends:
# this dependency is already loaded for this module
continue
ret = self._load_applicationservice(depend,
cfg,
)
if ret:
if distribution:
raise Exception(f'duplicate distribution for {cfg.module_name}: {distribution} and {ret} (dependencies: {cfg.depends}) ')
distribution = ret
return distribution
def _load_applicationservice_directories(self,
as_dir: str,
cfg: ModuleCfg,
) -> None:
# dictionaries
dictionaries_dir = join(as_dir, 'dictionaries')
if isdir(dictionaries_dir):
cfg.dictionaries_dir.append(dictionaries_dir)
# funcs
funcs_dir = join(as_dir, 'funcs')
if isdir(funcs_dir):
for f in listdir(funcs_dir):
if f.startswith('__'):
continue
cfg.functions_file.append(join(funcs_dir, f))
# templates
templates_dir = join(as_dir, 'templates')
if isdir(templates_dir):
cfg.templates_dir.append(templates_dir)
# patches
patches_dir = join(as_dir, 'patches')
if isdir(patches_dir):
cfg.patches_dir.append(patches_dir)
# extras
extras_dir = join(as_dir, 'extras')
if isdir(extras_dir):
for extra in listdir(extras_dir):
extra_dir = join(extras_dir, extra)
if isdir(extra_dir):
cfg.extra_dictionaries.setdefault(extra, []).append(extra_dir)
# manual
manual_dir = join(as_dir, 'manual', 'image')
if isdir(manual_dir):
cfg.manuals.append(manual_dir)
# tests
tests_dir = join(as_dir, 'tests')
if isdir(tests_dir):
cfg.tests.append(tests_dir)
def applicationservice_copy(src_file: str,
dst_file: str,
) -> None:
if isdir(src_file):
if not isdir(dst_file):
makedirs(dst_file)
for subfilename in listdir(src_file):
#if not copy_if_not_exists or not isfile(dst_file):
src = join(src_file, subfilename)
dst = join(dst_file, subfilename)
if isfile(src):
copy2(src, dst)
else:
copytree(src, dst)
else:
dst = dirname(dst_file)
if not isdir(dst):
makedirs(dst)
if isfile(src_file):
copy2(src_file, dst_file)
else:
copytree(src_file, dst_file)
def valid_mandatories(config):
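# collect every mandatory variable left without a value, grouped per server; hidden variables are only reported when nothing else is missing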
mandatories = config.value.mandatory()
config.property.remove('mandatory')
hidden = {}
variables = {}
title = None
if mandatories:
server_name = None
for mandatory_option in mandatories:
path_server_name, path = mandatory_option.path().split('.', 1)
var_server_name = config.option(path_server_name).description()
if server_name != var_server_name:
server_name = var_server_name
title = f'=== Missing variables for "{server_name.split(".", 1)[0]}" ==='
text = mandatory_option.doc()
msg = f' - {text} ({path})'
supplier = mandatory_option.information.get('supplier', None)
if supplier and not ':' in supplier:
msg += f' you should add a service that provides "{supplier}"'
if mandatory_option.isfollower():
leader = mandatory_option.leader()
try:
leader_value = leader.value.get()
except PropertiesOptionError as err:
if 'hidden' not in err.proptype:
raise err from err
hidden.setdefault(title, []).append(msg)
else:
config.property.add('mandatory')
for idx in range(mandatory_option.value.len()):
try:
config.option(mandatory_option.path(), idx).value.get()
except PropertiesOptionError as err:
path = leader.path()
spath = path.split('.', 1)[1]
submsg = f'{msg} at index {idx} (value of leader "{leader.doc()}" ({spath}) is "{leader_value[idx]}")'
if 'hidden' in err.proptype:
hidden.setdefault(title, []).append(submsg)
elif 'mandatory' in err.proptype:
variables.setdefault(title, []).append(submsg)
else:
raise err from err
config.property.remove('mandatory')
else:
try:
mandatory_option.value.get()
variables.setdefault(title, []).append(msg)
except PropertiesOptionError as err:
if 'hidden' not in err.proptype:
raise err from err
hidden.setdefault(title, []).append(msg)
if not variables:
variables = hidden
return variables

601
src/risotto/machine.py Normal file

@@ -0,0 +1,601 @@
from .utils import MULTI_FUNCTIONS, load_zones, value_pprint, RISOTTO_CONFIG, EXTRA_ANNOTATORS, ROUGAIL_NAMESPACE, ROUGAIL_NAMESPACE_DESCRIPTION
from .image import Applications, Modules, valid_mandatories, applicationservice_copy
from rougail import RougailConfig, RougailConvert
from os import remove, makedirs, listdir, chmod
from os.path import isfile, isdir, abspath, join, dirname
from pickle import dump as pickle_dump, load as pickle_load
from yaml import load as yaml_load, SafeLoader
from ipaddress import IPv4Interface, ip_network
#
from tiramisu import Config
from rougail.utils import normalize_family
from rougail import RougailSystemdTemplate
from shutil import copy2, copytree, rmtree
def tiramisu_display_name(kls,
dyn_name: 'Base'=None,
suffix: str=None,
) -> str:
# FIXME
if dyn_name is not None:
name = kls.impl_getpath() + str(suffix)
else:
name = kls.impl_getpath()
return name
CONFIG_FILE = 'servers.yml'
TIRAMISU_CACHE = 'tiramisu_cache.py'
VALUES_CACHE = 'values_cache.pickle'
INFORMATIONS_CACHE = 'informations_cache.pickle'
INSTALL_DIR = RISOTTO_CONFIG['directories']['dest']
INSTALL_CONFIG_DIR = 'configurations'
INSTALL_TMPL_DIR = 'templates'
INSTALL_IMAGES_DIR = 'images_files'
INSTALL_TESTS_DIR = 'tests'
def copy(src_file, dst_file):
if isdir(src_file):
if not isdir(dst_file):
makedirs(dst_file)
for subfilename in listdir(src_file):
if not isfile(dst_file):
src = join(src_file, subfilename)
dst = join(dst_file, subfilename)
if isfile(src):
copy2(src, dst)
else:
copytree(src, dst)
elif not isfile(dst_file):
dst = dirname(dst_file)
if not isdir(dst):
makedirs(dst)
if isfile(src_file):
copy2(src_file, dst_file)
else:
copytree(src_file, dst_file)
def re_create(dir_name):
if isdir(dir_name):
rmtree(dir_name)
makedirs(dir_name)
def remove_cache():
if isfile(TIRAMISU_CACHE):
remove(TIRAMISU_CACHE)
if isfile(VALUES_CACHE):
remove(VALUES_CACHE)
if isfile(INFORMATIONS_CACHE):
remove(INFORMATIONS_CACHE)
def templates(server_name,
config,
just_copy=False,
copy_manuals=False,
template=None,
extra_variables=None,
):
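# render the rougail templates of one machine into INSTALL_DIR (or copy them verbatim with just_copy) and optionally copy its manual and test files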
subconfig = config.option(normalize_family(server_name))
try:
subconfig.get()
except:
servers = [server.description() for server in config.list('optiondescription')]
raise Exception(f'cannot find server name "{server_name}": {servers}')
rougailconfig = RougailConfig.copy()
rougailconfig['variable_namespace'] = ROUGAIL_NAMESPACE
rougailconfig['variable_namespace_description'] = ROUGAIL_NAMESPACE_DESCRIPTION
rougailconfig['tmp_dir'] = 'tmp'
rougailconfig['templates_dir'] = subconfig.information.get('templates_dir')
rougailconfig['patches_dir'] = subconfig.information.get('patches_dir')
rougailconfig['functions_file'] = subconfig.information.get('functions_files')
module = subconfig.information.get('module')
is_host = module == 'host'
if is_host:
rougailconfig['systemd_tmpfile_delete_before_create'] = True
if just_copy:
raise Exception('cannot generate template with option just_copy for a host')
else:
rougailconfig['systemd_tmpfile_delete_before_create'] = False
#rougailconfig['systemd_tmpfile_factory_dir'] = '/usr/local/lib'
if not just_copy:
rougailconfig['destinations_dir'] = join(INSTALL_DIR, INSTALL_CONFIG_DIR, server_name)
else:
rougailconfig['destinations_dir'] = join(INSTALL_DIR, INSTALL_TMPL_DIR, server_name)
re_create(rougailconfig['destinations_dir'])
re_create(rougailconfig['tmp_dir'])
engine = RougailSystemdTemplate(subconfig,
rougailconfig,
)
if just_copy:
# set all engines to 'none' (raw copy, no templating)
ori_engines = {}
for eng in engine.engines:
if eng == 'none':
continue
ori_engines[eng] = engine.engines[eng]
engine.engines[eng] = engine.engines['none']
try:
if not template:
engine.instance_files(extra_variables=extra_variables)
else:
engine.instance_file(template, extra_variables=extra_variables)
except Exception as err:
print()
print(f'=== Configuration: {server_name} ===')
try:
values = subconfig.value.dict()
value_pprint(values, subconfig)
except:
pass
raise err from err
if just_copy:
for eng, old_engine in ori_engines.items():
engine.engines[eng] = old_engine
secrets_dir = join(rougailconfig['destinations_dir'], 'secrets')
if isdir(secrets_dir):
chmod(secrets_dir, 0o700)
if copy_manuals and not is_host:
dest_dir = join(INSTALL_DIR, INSTALL_IMAGES_DIR, module)
if not isdir(dest_dir):
for manual in subconfig.information.get('manuals_dirs'):
for filename in listdir(manual):
src_file = join(manual, filename)
dst_file = join(dest_dir, filename)
copy(src_file, dst_file)
copy_tests = config.information.get('copy_tests')
if copy_tests and not is_host:
dest_dir = join(INSTALL_DIR, INSTALL_TESTS_DIR, module)
if not isdir(dest_dir):
for tests in subconfig.information.get('tests_dirs'):
for filename in listdir(tests):
src_file = join(tests, filename)
dst_file = join(dest_dir, filename)
copy(src_file, dst_file)
class Loader:
def __init__(self,
hide_secret,
original_display_name,
valid_mandatories,
config_file=CONFIG_FILE,
):
self.hide_secret = hide_secret
self.original_display_name = original_display_name
self.valid_mandatories = valid_mandatories
self.config_file = config_file
def load_tiramisu_file(self):
"""Load config file (servers.yml) and build tiramisu file with dataset informations
"""
with open(self.config_file, 'r') as server_fh:
self.servers_json = yaml_load(server_fh, Loader=SafeLoader)
self.add_tls()
# set global rougail configuration
cfg = RougailConfig.copy()
cfg['variable_namespace'] = ROUGAIL_NAMESPACE
cfg['variable_namespace_description'] = ROUGAIL_NAMESPACE_DESCRIPTION
cfg['multi_functions'] = MULTI_FUNCTIONS
cfg['extra_annotators'] = EXTRA_ANNOTATORS
cfg['force_convert_dyn_option_description'] = True
cfg['risotto_globals'] = {}
# initialise variables to store useful information
# those variables are used during templating
self.templates_dir = {}
self.patches_dir = {}
self.functions_files = {}
self.manuals_dirs = {}
self.tests_dirs = {}
self.modules = {}
functions_files = set()
applicationservices = Applications()
zones_name = {}
rougail = RougailConvert(cfg)
for host_name, datas in self.servers_json['hosts'].items():
for server_name, server_datas in datas['servers'].items():
if 'provider_zone' not in server_datas and 'zones_name' not in server_datas:
raise Exception(f'cannot find "zones_name" attribute for server "{server_name}"')
if 'provider_zone' in server_datas:
zones_name.setdefault(server_datas['provider_zone'], []).append(server_name)
if 'zones_name' not in server_datas:
server_datas['zones_name'] = []
if server_datas['provider_zone'] in server_datas['zones_name']:
raise Exception(f'provider_zone "{server_datas["provider_zone"]}" must not be in "zones_name" "{server_datas["zones_name"]}"')
# external zone is better in first place
if server_datas['zones_name'] and self.servers_json['zones']['external_zone'] == server_datas['zones_name'][0]:
server_datas['zones_name'].append(server_datas['provider_zone'])
else:
server_datas['zones_name'].insert(0, server_datas['provider_zone'])
# if server_datas['zones_name'] and server_datas['provider_zone'] == self.servers_json['zones']['external_zone']:
# server_datas['zones_name'].insert(0, server_datas['provider_zone'])
# else:
# server_datas['zones_name'].append(server_datas['provider_zone'])
for zone in server_datas['zones_name']:
zones_name.setdefault(zone, []).append(server_name)
self.zones = {}
zones_network = ip_network(self.servers_json['zones']['network'])
zone_start_ip = zones_network.network_address
domain_name = self.servers_json['zones']['prefix_domain_name']
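# carve a subnet (/29 up to /26) out of the global network for each zone, just large enough for its machines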
for zone_name in zones_name:
len_zone = len(zones_name[zone_name])
for zone_cidr in [29, 28, 27, 26]:
try:
sub_network = ip_network(f'{zone_start_ip}/{zone_cidr}')
except ValueError:
# calc network address for this mask
zone_start_ip = IPv4Interface(f'{zone_start_ip}/{zone_cidr}').network.broadcast_address + 1
sub_network = ip_network(f'{zone_start_ip}/{zone_cidr}')
if not sub_network.subnet_of(zones_network):
raise Exception('not enough IP available')
length = sub_network.num_addresses - 3 # network + broadcast + host
if length >= len_zone:
break
else:
raise Exception(f'network too small for zone "{zone_name}" ({sub_network.num_addresses - 2} < {len_zone})')
if self.servers_json['zones']['external_zone'] == zone_name:
zone_domaine_name = domain_name
else:
zone_domaine_name = zone_name + '.' + domain_name
network = sub_network.network_address
self.zones[zone_name] = {'domain_name': zone_domaine_name,
'network': str(sub_network),
'host_ip': str(network + 1),
'host_name': host_name.split('.', 1)[0],
'length': length,
'start_ip': str(network + 2)
}
zone_start_ip = str(sub_network.broadcast_address + 1)
for host_name, datas in self.servers_json['hosts'].items():
# load modules associate to this host
modules_name = set()
for name, mod_datas in datas['servers'].items():
if not 'applicationservice' in mod_datas:
raise Exception(f'applicationservice is mandatory for "{name}"')
modules_name.add(mod_datas['applicationservice'])
# load modules informations from config files
modules = Modules(applicationservices,
datas['applicationservice_provider'],
modules_name,
datas['applicationservice'],
)
# load host
module_info = modules.get('host')
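# note: server_name below is left over from the zone loop above and is presumably the 'tls' machine added by add_tls()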
tls_host_name = f'{server_name}.{self.zones[list(self.zones)[0]]["domain_name"]}'
short_host_name = host_name.split('.', 1)[0]
values = [f'{short_host_name}.{self.zones[zone_name]["domain_name"]}' for zone_name in zones_name]
cfg['risotto_globals'][host_name] = {'global:server_name': host_name,
'global:server_names': values,
'global:zones_name': list(self.zones),
'global:module_name': 'host',
'global:host_install_dir': abspath(INSTALL_DIR),
'global:tls_server': tls_host_name,
}
functions_files |= set(module_info.functions_file)
self.load_dictionaries(cfg,
module_info,
host_name,
rougail,
)
# load servers
modules_info = {}
for server_name, server_datas in datas['servers'].items():
module_info = modules.get(server_datas['applicationservice'])
zones_name = server_datas['zones_name']
values = [f'{server_name}.{self.zones[zone_name]["domain_name"]}' for zone_name in zones_name]
if server_datas['applicationservice'] == 'tls':
true_host_name = f'{server_name}.{self.zones[server_datas["zones_name"][0]]["domain_name"]}'
else:
true_host_name = values[0]
cfg['risotto_globals'][true_host_name] = {'global:host_name': host_name,
'global:server_name': true_host_name,
'global:server_names': values,
'global:zones_name': zones_name,
'global:zones_list': list(range(len(zones_name))),
'global:module_name': server_datas['applicationservice'],
'global:prefix_domain_name': self.servers_json['zones']['prefix_domain_name']
}
if 'provider_zone' in server_datas:
cfg['risotto_globals'][true_host_name]['global:provider_zone'] = server_datas['provider_zone']
server_datas['server_name'] = true_host_name
functions_files |= set(module_info.functions_file)
self.load_dictionaries(cfg,
module_info,
true_host_name,
rougail,
)
modules_info[module_info.module_name] = module_info.depends
self.modules[host_name] = modules_info
cfg['functions_file'] = list(functions_files)
self.tiram_obj = rougail.save(TIRAMISU_CACHE)
with open(TIRAMISU_CACHE, 'a') as cache:
cache.write(f"""#from pickle import load
#config = Config(option_0)
#config.property.read_only()
#with open('{VALUES_CACHE}', 'rb') as fh:
# config.value.importation(load(fh))
#with open('{INFORMATIONS_CACHE}', 'rb') as fh:
# config.information.importation(load(fh))
#print(config.value.mandatory())
""")
def add_tls(self):
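# automatically add a 'tls' machine to every host, placed in the external zone and, when a reverse proxy is declared, in its provider zone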
dns_module_name = None
for host in self.servers_json['hosts'].values():
zones = [self.servers_json['zones']['external_zone'], None]
for server_name, datas in host['servers'].items():
if not 'applicationservice' in datas:
raise Exception(f'cannot find applicationservice for "{server_name}"')
if datas['applicationservice'] == 'tls':
raise Exception(f'forbidden module name "tls" for server "{server_name}"')
#FIXME use provider!
if datas['applicationservice'] == 'nginx-reverse-proxy' and len(datas['zones_name']) > 0:
if dns_module_name:
break
zones[1] = datas['provider_zone']
if zones[0] == zones[1] or not zones[1]:
zones.pop(1)
host['servers']['tls'] = {'applicationservice': 'tls',
'zones_name': zones,
}
def load_dictionaries(self, cfg, module_info, server_name, rougail):
if not module_info.dictionaries_dir:
raise Exception(f'server "{server_name}" has any dictionaries!')
cfg['dictionaries_dir'] = module_info.dictionaries_dir
cfg['extra_dictionaries'] = module_info.extra_dictionaries
cfg['functions_file'] = module_info.functions_file
rougail.load_dictionaries(path_prefix=server_name)
self.templates_dir[server_name] = module_info.templates_dir
self.patches_dir[server_name] = module_info.patches_dir
self.functions_files[server_name] = module_info.functions_file
self.manuals_dirs[server_name] = module_info.manuals
self.tests_dirs[server_name] = module_info.tests
def tiramisu_file_to_tiramisu(self):
# execute the generated tiramisu code to build the option description tree
tiramisu_space = {}
try:
exec(self.tiram_obj, None, tiramisu_space)
except Exception as err:
raise Exception(f'unknown error while loading the tiramisu object: "{err}", see the file "{TIRAMISU_CACHE}" for more details') from err
if self.original_display_name:
display_name = None
else:
display_name = tiramisu_display_name
self.config = Config(tiramisu_space['option_0'],
display_name=display_name,
)
def load_values_and_informations(self):
config = self.config
config.property.read_write()
config.property.remove('validator')
config.property.remove('cache')
load_zones(self.zones, self.servers_json['hosts'])
config.information.set('zones', self.zones)
for host_name, hosts_datas in self.servers_json['hosts'].items():
information = config.option(normalize_family(host_name)).information
information.set('module', 'host')
information.set('templates_dir', self.templates_dir[host_name])
information.set('patches_dir', self.patches_dir[host_name])
information.set('functions_files', self.functions_files[host_name])
self.set_values(host_name, config, hosts_datas)
for datas in hosts_datas['servers'].values():
server_name = datas['server_name']
information = config.option(normalize_family(server_name)).information
information.set('module', datas['applicationservice'])
information.set('templates_dir', self.templates_dir[server_name])
information.set('patches_dir', self.patches_dir[server_name])
information.set('functions_files', self.functions_files[server_name])
information.set('manuals_dirs', self.manuals_dirs[server_name])
information.set('tests_dirs', self.tests_dirs[server_name])
self.set_values(server_name, config, datas)
config.information.set('copy_tests', False)
# FIXME only one host_name is supported
config.information.set('modules', self.modules[host_name])
# config.information.set('modules', {module_name: module_info.depends for module_name, module_info in self.module_infos.items() if module_name in modules})
with open(VALUES_CACHE, 'wb') as fh:
pickle_dump(config.value.exportation(), fh)
with open(INFORMATIONS_CACHE, 'wb') as fh:
pickle_dump(config.information.exportation(), fh)
config.property.add('cache')
if self.valid_mandatories:
messages = valid_mandatories(config)
if messages:
msg = ''
for title, variables in messages.items():
msg += '\n' + title + '\n'
msg += '\n'.join(variables)
raise Exception(msg)
config.property.read_only()
def set_values(self,
server_name,
config,
datas,
):
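# apply the values declared in servers.yml; the owner is temporarily set to the config file name so these values can be told apart from later user changes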
if 'values' not in datas:
return
if not isinstance(datas['values'], dict):
raise Exception(f'Values of "{server_name}" are not a dict: {datas["values"]}')
server_path = normalize_family(server_name)
config.owner.set(self.config_file)
for vpath, value in datas['values'].items():
path = f'{server_path}.{vpath}'
try:
if isinstance(value, dict):
for idx, val in value.items():
config.option(path, int(idx)).value.set(val)
else:
config.option(path).value.set(value)
except Exception as err:
value_pprint(config.value.dict(), config)
error_msg = f'cannot configure variable {vpath} for server "{server_name}": {err}'
raise Exception(error_msg) from err
config.owner.set('user')
class LoaderCache(Loader):
def load_tiramisu_file(self):
with open(TIRAMISU_CACHE) as fh:
self.tiram_obj = fh.read()
def load_values_and_informations(self):
with open(VALUES_CACHE, 'rb') as fh:
self.config.value.importation(pickle_load(fh))
with open(INFORMATIONS_CACHE, 'rb') as fh:
self.config.information.importation(pickle_load(fh))
def load(hide_secret=False,
original_display_name: bool=False,
valid_mandatories: bool=True,
copy_tests: bool=False,
):
if isfile(TIRAMISU_CACHE) and isfile(VALUES_CACHE) and isfile(INFORMATIONS_CACHE):
loader_obj = LoaderCache
else:
loader_obj = Loader
loader = loader_obj(hide_secret,
original_display_name,
valid_mandatories,
)
loader.load_tiramisu_file()
loader.tiramisu_file_to_tiramisu()
loader.load_values_and_informations()
config = loader.config
config.property.read_only()
config.information.set('copy_tests', copy_tests)
config.cache.reset()
return config
def build_files(hostname: str,
only_machine: str,
just_copy: bool,
copy_tests: bool,
template: str=None,
) -> None:
if isdir(INSTALL_DIR):
rmtree(INSTALL_DIR)
makedirs(INSTALL_DIR)
with open(CONFIG_FILE, 'r') as server_fh:
servers_json = yaml_load(server_fh, Loader=SafeLoader)
config = load(copy_tests=copy_tests)
machines = [subconfig.description() for subconfig in config.option.list(type='optiondescription')]
certificates = {'certificates': {},
'configuration': servers_json['certificates'],
}
# get certificate information
tls_machine = config.option(f'{normalize_family(hostname)}.general.tls_server').value.get()
for machine in machines:
if machine == tls_machine:
continue
if hostname is None:
# FIXME multi host!
hostname = config.option(normalize_family(machine)).option('general.host_name').value.get()
if just_copy:
continue
is_host = machine == hostname
if is_host:
continue
machine_config = config.option(normalize_family(machine))
certificate_names = []
private_names = []
for service in machine_config.option('services').list('optiondescription'):
if not service.option('activate').value.get():
continue
# if service.option('manage').value.get():
# certificate_type = 'server'
# else:
# certificate_type = 'client'
tls_ca_directory = machine_config.option('general.tls_ca_directory').value.get()
tls_cert_directory = machine_config.option('general.tls_cert_directory').value.get()
tls_key_directory = machine_config.option('general.tls_key_directory').value.get()
try:
for certificate in service.option('certificates').list('all'):
if not certificate.option('activate').value.get():
continue
certificate_data = {key.rsplit('.', 1)[1]: value for key, value in certificate.value.dict().items()}
certificate_data['type'] = certificate.information.get('type')
certificate_data['authority'] = join(tls_ca_directory, certificate.information.get('authority') + '.crt')
certificate_data['format'] = certificate.information.get('format')
is_list_name = isinstance(certificate_data['name'], list)
is_list_domain = isinstance(certificate_data['domain'], list)
if is_list_name != is_list_domain:
raise Exception('certificate name and domain must both be lists or both be single values')
if 'provider' not in certificate_data:
certificate_data['provider'] = 'autosigne'
if is_list_name:
if len(certificate_data['name']) != len(certificate_data['domain']):
raise Exception('certificate name and domain name must have same length')
for idx, certificate_name in enumerate(certificate_data['name']):
cert_data = certificate_data.copy()
if certificate_data['format'] == 'cert_key':
cert_data['name'] = join(tls_cert_directory, certificate_name + '.crt')
private = join(tls_key_directory, certificate_name + '.key')
if private in private_names:
raise Exception(f'duplicate private key {private} for {machine}')
cert_data['private'] = private
private_names.append(private)
else:
cert_data['name'] = join(tls_key_directory, certificate_name + '.pem')
cert_data['domain'] = certificate_data['domain'][idx]
if cert_data['name'] in certificate_names:
raise Exception(f'duplicate certificate {cert_data["name"]} for {machine}')
certificates['certificates'].setdefault(machine, []).append(cert_data)
certificate_names.append(cert_data['name'])
else:
name = certificate_data['name']
if certificate_data['format'] == 'cert_key':
certificate_data['name'] = join(tls_cert_directory, name + '.crt')
private = join(tls_key_directory, name + '.key')
if private in private_names:
raise Exception(f'duplicate private key {private} for {machine}')
certificate_data['private'] = private
else:
certificate_data['name'] = join(tls_key_directory, name + '.pem')
if certificate_data['name'] in certificate_names:
raise Exception(f'duplicate certificate {certificate_data["name"]} for {machine}')
certificate_names.append(certificate_data['name'])
certificates['certificates'].setdefault(machine, []).append(certificate_data)
except AttributeError:
pass
directories = {}
for machine in machines:
if just_copy and hostname == machine:
continue
if only_machine and only_machine != machine:
continue
templates(machine,
config,
just_copy=just_copy,
copy_manuals=True,
template=template,
extra_variables=certificates,
)
is_host = machine == hostname
if is_host:
directories[machine] = '/usr/local/lib'
elif not just_copy:
machine_config = config.option(normalize_family(machine))
directories[machine] = machine_config.option('general.config_dir').value.get()
if only_machine and only_machine not in directories:
raise Exception(f'cannot find machine {only_machine}: {machines}')
if only_machine:
return directories
return directories, certificates

View file

@@ -1,5 +1,9 @@
from rougail.annotator.variable import Walk
from rougail.error import DictConsistencyError
from rougail.utils import normalize_family
from risotto.utils import _
from warnings import warn
from typing import List, Tuple
class Annotator(Walk):
@@ -7,51 +11,394 @@ class Annotator(Walk):
def __init__(self,
objectspace: 'RougailObjSpace',
*args):
self.providers = {}
self.suppliers = {}
self.globals = {}
self.provider_links = {}
self.providers_zone = {}
self.provider_maps = {}
self.objectspace = objectspace
# self.convert_get_linked_information()
# self.convert_provider()
#
self.get_providers_suppliers()
self.get_provider_links()
self.get_provider_maps()
self.convert_providers()
self.convert_globals()
def convert_get_linked_information(self):
if not hasattr(self.objectspace.space, 'constraints') or \
not hasattr(self.objectspace.space.constraints, 'fill'):
return
for fill in self.objectspace.space.constraints.fill:
if fill.name == 'get_linked_configuration':
# add server_name
param = self.objectspace.param(fill.xmlfiles)
param.name = 'server_name'
param.type = 'information'
param.text = 'server_name'
fill.param.append(param)
# add current_user
param = self.objectspace.param(fill.xmlfiles)
param.name = 'current_user'
param.type = 'information'
param.text = 'current_user'
fill.param.append(param)
# add test
param = self.objectspace.param(fill.xmlfiles)
param.name = 'test'
param.type = 'target_information'
param.text = 'test'
fill.param.append(param)
def convert_provider(self):
if not hasattr(self.objectspace.space, 'variables'):
return
for family in self.get_families():
if not hasattr(family, 'provider'):
continue
if 'dynamic' not in vars(family):
raise Exception(_(f'{family.name} is not a dynamic family so cannot have provider attribute'))
if not hasattr(family, 'information'):
family.information = self.objectspace.information(family.xmlfiles)
family.information.provider = family.provider
del family.provider
def get_providers_suppliers(self) -> None:
"""parse all variable and get provider and supplier informations
"""
for variable in self.get_variables():
if not hasattr(variable, 'provider') and not hasattr(variable, 'supplier'):
continue
for type_ in 'provider', 'supplier':
if hasattr(variable, type_):
provider_name = getattr(variable, type_)
# add an information entry carrying the provider/supplier name
if not hasattr(variable, 'information'):
variable.information = self.objectspace.information(variable.xmlfiles)
setattr(variable.information, type_, provider_name)
delattr(variable, type_)
# build the self.globals, self.suppliers and self.providers dictionaries
provider_prefix, provider_suffix = self._cut_out_provider_name(provider_name)
dns = self.objectspace.space.variables[variable.path_prefix].doc
if provider_prefix == 'global':
if type_ == 'supplier':
raise DictConsistencyError(f'{type_} {provider_name} in {dns} not allowed', 0, variable.xmlfiles)
obj = self.globals
elif type_ == 'supplier':
obj = self.suppliers
else:
obj = self.providers
sub_obj = obj.setdefault(provider_prefix, {}).setdefault(dns, {})
if provider_suffix in sub_obj:
raise DictConsistencyError(f'multiple {type_} {provider_name} in {dns}', 0, sub_obj[provider_suffix].xmlfiles + variable.xmlfiles)
sub_obj[provider_suffix] = variable
def _cut_out_provider_name(self,
provider_name: str,
) -> Tuple[str, str]:
"""get provider_name and return provider_prefix and provider_suffix
"""
if ':' in provider_name:
provider_prefix, provider_suffix = provider_name.split(':', 1)
else:
provider_prefix = provider_name
provider_suffix = None
return provider_prefix, provider_suffix
def get_provider_links(self):
"""Search link between providers
'ProviderPrefix': {'provider_dns': ['supplier_dns_1',
'supplier_dns_2',
'supplier_dns_3']}
"""
for provider_prefix, providers_dns in self.providers.items():
self.provider_links[provider_prefix] = {}
for provider_dns, providers_suffix in providers_dns.items():
if None not in providers_suffix:
# it's a reverse provider!
continue
if provider_prefix != 'Host':
provider_zone = self.objectspace.rougailconfig['risotto_globals'][provider_dns]['global:provider_zone']
if provider_prefix not in self.suppliers:
continue
for supplier_dns, suppliers_suffix in self.suppliers[provider_prefix].items():
if provider_dns == supplier_dns:
continue
if provider_prefix == 'Host':
provider_zone = self.objectspace.rougailconfig['risotto_globals'][supplier_dns]['global:zones_name'][0]
if provider_zone not in self.objectspace.rougailconfig['risotto_globals'][supplier_dns]['global:zones_name']:
continue
self.provider_links[provider_prefix].setdefault(provider_dns, []).append(supplier_dns)
def get_provider_maps(self):
"""relation between provider_prefix and provider_suffix
'provider_prefix': {'normal': {None, 'provider_suffix_1', 'provider_suffix_2'},
'reverse': {'provider_suffix_3', 'provider_suffix_4'}}
"""
for provider_prefix, providers_dns in self.provider_links.items():
self.provider_maps[provider_prefix] = {'normal': set(), 'reverse': set()}
for provider_dns, suppliers_dns in providers_dns.items():
for supplier_dns in suppliers_dns:
if supplier_dns in self.providers[provider_prefix] and None in self.providers[provider_prefix][supplier_dns]:
# the supplier is itself an unsuffixed provider: skip it
continue
# get prefixes
prefixes = set(self.providers[provider_prefix][provider_dns]) & set(self.suppliers[provider_prefix][supplier_dns])
self.provider_maps[provider_prefix]['normal'] |= prefixes
# get suffixes
if supplier_dns not in self.providers[provider_prefix]:
continue
suffixes = set(self.providers[provider_prefix][supplier_dns]) & set(self.suppliers[provider_prefix][provider_dns])
self.provider_maps[provider_prefix]['reverse'] |= suffixes
def convert_providers(self) -> None:
"""Convert providers informations to default values or fills
"""
for provider_prefix, providers_dns in self.provider_links.items():
for provider_dns, suppliers_dns in providers_dns.items():
for provider_suffix in self.provider_maps[provider_prefix]['normal']:
self._convert_providers_normal(provider_prefix,
provider_suffix,
provider_dns,
suppliers_dns,
)
for provider_suffix in self.provider_maps[provider_prefix]['reverse']:
self._convert_providers_reverse(provider_prefix,
provider_suffix,
provider_dns,
suppliers_dns,
)
def _convert_providers_normal(self,
provider_prefix: str,
provider_suffix: str,
provider_dns: str,
suppliers_dns: dict,
) -> None:
if provider_prefix != 'Host':
provider_zone = self.objectspace.rougailconfig['risotto_globals'][provider_dns]['global:provider_zone']
provider_option_dns = self._get_dns_from_provider_zone(provider_dns, provider_zone)
variable = self.providers[provider_prefix][provider_dns][provider_suffix]
if hasattr(variable, 'value'):
raise DictConsistencyError(f'variable {variable.path} has a provider and a value', 0, variable.xmlfiles)
suppliers_var = {}
for supplier_dns in suppliers_dns:
if provider_suffix not in self.suppliers[provider_prefix][supplier_dns]:
continue
if provider_prefix == 'Host':
provider_zone = self.objectspace.rougailconfig['risotto_globals'][supplier_dns]['global:zones_name'][0]
provider_option_dns = self._get_dns_from_provider_zone(provider_dns, provider_zone)
supplier_variable = self.suppliers[provider_prefix][supplier_dns][provider_suffix]
supplier_option_dns = self._get_dns_from_provider_zone(supplier_dns, provider_zone)
suppliers_var[supplier_option_dns] = supplier_variable
if provider_suffix:
AddFill(self.objectspace,
provider_prefix,
provider_suffix,
provider_option_dns,
variable,
suppliers_var,
False,
supplier_variable.path_prefix,
)
else:
self._set_provider_supplier(provider_option_dns,
variable,
suppliers_var,
)
def _convert_providers_reverse(self,
provider_prefix: str,
provider_suffix: str,
provider_dns: str,
suppliers_dns: dict,
) -> None:
if provider_prefix != 'Host':
provider_zone = self.objectspace.rougailconfig['risotto_globals'][provider_dns]['global:provider_zone']
provider_option_dns = self._get_dns_from_provider_zone(provider_dns, provider_zone)
variable = self.suppliers[provider_prefix][provider_dns][provider_suffix]
if hasattr(variable, 'value'):
raise DictConsistencyError(f'variable {variable.path} has a provider and a value', 0, variable.xmlfiles)
for supplier_dns in suppliers_dns:
if provider_prefix == 'Host':
provider_zone = self.objectspace.rougailconfig['risotto_globals'][supplier_dns]['global:zones_name'][0]
provider_option_dns = self._get_dns_from_provider_zone(provider_dns, provider_zone)
supplier_variable = self.providers[provider_prefix][supplier_dns][provider_suffix]
supplier_option_dns = self._get_dns_from_provider_zone(supplier_dns, provider_zone)
AddFill(self.objectspace,
provider_prefix,
provider_suffix,
supplier_option_dns,
supplier_variable,
{provider_option_dns: variable},
True,
supplier_variable.path_prefix,
)
def _get_dns_from_provider_zone(self,
dns,
zone,
) -> str:
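# the zone's index in 'global:zones_name' selects the matching entry in 'global:server_names'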
risotto_global = self.objectspace.rougailconfig['risotto_globals'][dns]
index = risotto_global['global:zones_name'].index(zone)
return risotto_global['global:server_names'][index]
def _set_provider_supplier(self,
provider_dns: str,
variable,
suppliers,
) -> None:
# suffix is None, so this is not a client and there is only one provider
# the value of this variable is the list of the suppliers' DNS names
if not variable.multi:
raise DictConsistencyError(f'"{variable.name}" is a provider and must be a multi', 0, variable.xmlfiles)
variable.default = list(suppliers)
# suffix is None, so each supplier value is the provider's DNS name
for sub_variable in suppliers.values():
#FIXME
#if hasattr(option, 'value'):
# raise DictConsistencyError(f'"{option.name}" is a supplier and cannot have value', 0, option.xmlfiles)
if sub_variable.multi:
raise DictConsistencyError(f'"{sub_variable.name}" is a supplier and mustnot be a multi', 0, sub_variable.xmlfiles)
sub_variable.default = provider_dns
def convert_globals(self):
"""Convert providers global informations to default values or fills
"""
provider_prefix = 'global'
for provider_dns, providers_suffix in self.globals[provider_prefix].items():
for provider_suffix, variable in providers_suffix.items():
provider_name = f'{provider_prefix}:{provider_suffix}'
if provider_name not in self.objectspace.rougailconfig['risotto_globals'][provider_dns]:
raise DictConsistencyError(f'cannot find {provider_name} for variable {variable.path}, should be in {list(self.objectspace.rougailconfig["risotto_globals"][provider_dns])}', 0, variable.xmlfiles)
provider_values = self.objectspace.rougailconfig['risotto_globals'][provider_dns][provider_name]
if isinstance(provider_values, list) and self.objectspace.paths.is_dynamic(variable):
if variable.multi:
raise DictConsistencyError(f'variable {variable.path} has provider {provider_name} and is in a dynamic family, so it must not be a multi', 0, variable.xmlfiles)
self._set_global_dynamic_option(variable, provider_values)
else:
if isinstance(provider_values, list) and not variable.multi:
raise DictConsistencyError(f'variable {variable.path} has provider {provider_name} whose values are a list, so it must be a multi', 0, variable.xmlfiles)
if not isinstance(provider_values, list):
if variable.multi:
raise DictConsistencyError(f'variable {variable.path} has provider {provider_name} whose value is not a list, so it must not be a multi', 0, variable.xmlfiles)
provider_values = [provider_values]
variable.value = []
for provider_value in provider_values:
value = self.objectspace.value(variable.xmlfiles)
value.name = provider_value
if isinstance(provider_value, bool):
value.type = 'boolean'
variable.value.append(value)
def _set_global_dynamic_option(self,
variable: 'self.objectspace.Variable',
values: List[str],
):
fill = self.objectspace.fill(variable.xmlfiles)
new_target = self.objectspace.target(variable.xmlfiles)
new_target.name = variable.name
fill.target = [new_target]
fill.namespace = variable.namespace
fill.index = 0
fill.name = 'risotto_providers_global'
param1 = self.objectspace.param(variable.xmlfiles)
param1.text = values
param2 = self.objectspace.param(variable.xmlfiles)
param2.type = 'suffix'
fill.param = [param1, param2]
if not hasattr(self.objectspace.space.variables[variable.path_prefix].constraints, 'fill'):
self.objectspace.space.variables[variable.path_prefix].constraints.fill = []
self.objectspace.space.variables[variable.path_prefix].constraints.fill.append(fill)
class AddFill:
"""Add fill for variable
"""
def __init__(self,
objectspace,
provider_prefix,
provider_suffix,
provider_dns,
variable,
suppliers,
reverse,
path_prefix,
) -> None:
self.objectspace = objectspace
self.provider_dns = provider_dns
self.variable = variable
self.path_prefix = path_prefix
self.suppliers = suppliers
self.create_fill()
if reverse:
self.param_reverse()
else:
if self.objectspace.paths.is_dynamic(self.variable):
self.param_dynamic()
elif self.variable.multi:
self.param_multi()
else:
provider_name = f'{provider_prefix}:{provider_suffix}'
raise DictConsistencyError(f'provider "{provider_name}" options must be in dynamic option or must be a multiple', 0, self.variable.xmlfiles)
if self.objectspace.paths.is_follower(self.variable):
self.param_follower()
self.end()
def create_fill(self) -> None:
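# build the rougail fill (calculation) targeting the provider variable; its parameters are appended by the param_* methods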
self.fill = self.objectspace.fill(self.variable.xmlfiles)
new_target = self.objectspace.target(self.variable.xmlfiles)
new_target.name = self.variable
self.fill.target = [new_target]
self.fill.namespace = self.variable.namespace
self.fill.index = 0
self.fill.param = []
def param_reverse(self) -> None:
self.fill.name = 'risotto_flatten_values_client'
#
if self.objectspace.paths.is_follower(self.variable):
multi = self.variable.multi is True
else:
multi = self.variable.multi is not False
param = self.objectspace.param(self.variable.xmlfiles)
param.text = multi
param.type = 'boolean'
self.fill.param.append(param)
for dns, variable in self.suppliers.items():
param = self.objectspace.param(variable.xmlfiles)
param.text = variable
param.propertyerror = False
param.type = 'variable'
param.suffix = normalize_family(self.provider_dns)
namespace = variable.namespace
family_path = self.objectspace.paths.get_variable_family_path(param.text.path,
namespace,
force_path_prefix=self.variable.path_prefix,
)
param.family = self.objectspace.paths.get_family(family_path,
namespace,
self.variable.path_prefix,
)
self.fill.param.append(param)
def param_dynamic(self) -> None:
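# parameters passed to risotto_dyn_values: the dynamic suffix, the unique flag, the multi flag, then one named variable per supplier DNS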
self.fill.name = 'risotto_dyn_values'
#
param = self.objectspace.param(self.variable.xmlfiles)
param.type = 'suffix'
self.fill.param.append(param)
#
param = self.objectspace.param(self.variable.xmlfiles)
param.text = self.variable.unique != "False"
param.type = 'boolean'
self.fill.param.append(param)
#
if self.objectspace.paths.is_follower(self.variable):
multi = self.variable.multi is True
else:
multi = self.variable.multi is not False
param = self.objectspace.param(self.variable.xmlfiles)
param.text = multi
param.type = 'boolean'
self.fill.param.append(param)
for dns, variable in self.suppliers.items():
#
param = self.objectspace.param(variable.xmlfiles)
param.text = variable
param.name = normalize_family(dns)
param.propertyerror = False
param.type = 'variable'
self.fill.param.append(param)
def param_multi(self) -> None:
self.fill.name = 'risotto_flatten_values'
#
if self.objectspace.paths.is_follower(self.variable):
multi = self.variable.multi is True
else:
multi = self.variable.multi is not False
param = self.objectspace.param(self.variable.xmlfiles)
param.text = multi
param.type = 'boolean'
self.fill.param.append(param)
for dns, variable in self.suppliers.items():
param = self.objectspace.param(variable.xmlfiles)
param.text = variable
param.propertyerror = False
param.type = 'variable'
self.fill.param.append(param)
def param_follower(self):
param = self.objectspace.param(self.variable.xmlfiles)
param.name = 'follower_index'
param.type = 'index'
self.fill.param.append(param)
def end(self):
if not hasattr(self.objectspace.space.variables[self.path_prefix], 'constraints'):
self.objectspace.space.variables[self.path_prefix].constraints = self.objectspace.constraints(None)
if not hasattr(self.objectspace.space.variables[self.path_prefix].constraints, 'fill'):
self.objectspace.space.variables[self.path_prefix].constraints.fill = []
self.objectspace.space.variables[self.path_prefix].constraints.fill.append(self.fill)

View file

@@ -0,0 +1,61 @@
from risotto.utils import multi_function as _multi_function
from rougail.utils import normalize_family
from tiramisu import valid_network_netmask, valid_ip_netmask, valid_broadcast, valid_in_network, valid_not_equal, calc_value, calc_value_property_help
@_multi_function
def risotto_providers_global(value, suffix=None):
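# e.g. risotto_providers_global(['10.0.0.1', '10.0.1.1'], suffix='1') -> '10.0.1.1'  (illustrative values)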
if suffix is not None:
return value[int(suffix)]
return value
@_multi_function
def risotto_flatten_values(multi, *args, follower_index=None):
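# e.g. risotto_flatten_values(False, 'a') -> 'a' and risotto_flatten_values(True, 'a', 'b') -> ('a', 'b')  (illustrative values)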
values = []
#if follower_index is None:
# for arg in args:
# if isinstance(arg, list):
# values.extend(arg)
# else:
# values.append(arg)
#else:
values = args
if follower_index is not None and len(values) > follower_index:
values = values[follower_index]
elif not multi:
if not values:
values = None
elif len(values) == 1:
values = values[0]
return values
@_multi_function
def risotto_flatten_values_client(multi, *args):
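# e.g. risotto_flatten_values_client(True, ['a', 'b'], 'c') -> ['a', 'b', 'c']  (illustrative values)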
values = []
for arg in args:
if isinstance(arg, list):
values.extend(arg)
else:
values.append(arg)
if not multi:
if not values:
values = None
elif len(values) == 1:
values = values[0]
return values
@_multi_function
def risotto_dyn_values(suffix, unique, multi, follower_index=None, **kwargs):
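# kwargs are keyed by normalized supplier DNS; the value whose name matches the normalized dynamic suffix is returned, deduplicated when unique is set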
values = kwargs.get(normalize_family(suffix), [] if multi else None)
if not multi and follower_index is not None and isinstance(values, list) and len(values) > follower_index:
values = values[follower_index]
if isinstance(values, list) and unique:
values_ = []
for val in values:
if val not in values_:
values_.append(val)
values = values_
return values

View file

@@ -1,5 +1,31 @@
from os import environ, makedirs
from os.path import isfile, join, isdir
from typing import List
from ipaddress import ip_address
from toml import load as toml_load
from json import load, dump
from json.decoder import JSONDecodeError
from pprint import pprint
MULTI_FUNCTIONS = []
CONFIGS = {}
EXTRA_ANNOTATORS = ['risotto.rougail']
ROUGAIL_NAMESPACE = 'general'
ROUGAIL_NAMESPACE_DESCRIPTION = 'Général'
HERE = environ['PWD']
IP_DIR = join(HERE, 'ip')
# custom filters from dataset
custom_filters = {}
config_file = environ.get('CONFIG_FILE', 'risotto.conf')
if isfile(config_file):
with open(config_file, 'r') as fh:
RISOTTO_CONFIG = toml_load(fh)
else:
RISOTTO_CONFIG = {}
def _(s):
@@ -12,3 +38,61 @@ def multi_function(function):
if name not in MULTI_FUNCTIONS:
MULTI_FUNCTIONS.append(name)
return function
def value_pprint(dico, config):
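# pretty-print a configuration dict, masking password values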
pprint_dict = {}
for path, value in dico.items():
if config.option(path).type() == 'password' and value:
value = 'X' * len(value)
pprint_dict[path] = value
pprint(pprint_dict)
def load_zones(zones, hosts):
if not isdir(IP_DIR):
makedirs(IP_DIR)
json_file = join(IP_DIR, 'zones.json')
if isfile(json_file):
try:
with open(json_file, 'r') as fh:
ori_zones_ip = load(fh)
except JSONDecodeError:
ori_zones_ip = {}
else:
ori_zones_ip = {}
new_zones_ip = {}
# cache: a machine should keep the same IP address between runs
for host_name, dhosts in hosts.items():
for server_name, server in dhosts['servers'].items():
server_zones = server['zones_name']
for idx, zone_name in enumerate(server_zones):
zone = zones[zone_name]
zone.setdefault('hosts', {})
if zone_name not in new_zones_ip:
new_zones_ip[zone_name] = {}
if zone_name in ori_zones_ip and server_name in ori_zones_ip[zone_name]:
server_index = ori_zones_ip[zone_name][server_name]
if server_index >= zone['length']:
server_index = None
elif server_index in new_zones_ip[zone_name].values():
server_index = None
else:
server_index = None
new_zones_ip[zone_name][server_name] = server_index
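# assign the first free index in the zone to every server that does not have one yet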
for zone_name, servers in new_zones_ip.items():
for server_name, server_idx in servers.items():
if server_idx is not None:
continue
for new_idx in range(zones[zone_name]['length']):
if new_idx not in new_zones_ip[zone_name].values():
new_zones_ip[zone_name][server_name] = new_idx
break
else:
raise Exception(f'cannot find free IP in zone "{zone_name}" for "{server_name}"')
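# turn each index into an IP address, offset from the zone's start_ip, then persist the cache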
for zone_name, servers in new_zones_ip.items():
start_ip = ip_address(zones[zone_name]['start_ip'])
for server_name, server_index in servers.items():
zones[zone_name]['hosts'][server_name] = str(start_ip + server_index)
with open(json_file, 'w') as fh:
dump(new_zones_ip, fh)

View file

@@ -1,254 +0,0 @@
from OpenSSL.crypto import load_certificate, load_privatekey, dump_certificate, dump_privatekey, dump_publickey, PKey, X509, X509Extension, TYPE_RSA, FILETYPE_PEM
from os import makedirs, symlink
from os.path import join, isdir, isfile, exists
#from shutil import rmtree
from datetime import datetime
PKI_DIR = 'pki/x509'
#FIXME
EMAIL = 'gnunux@gnunux.info'
COUNTRY = 'FR'
LOCALITY = 'Dijon'
STATE = 'France'
ORG_NAME = 'Cadoles'
ORG_UNIT_NAME = 'CSS'
def _gen_key_pair():
key = PKey()
key.generate_key(TYPE_RSA, 4096)
return key
def _gen_cert(is_ca,
common_names,
serial_number,
validity_end_in_seconds,
key_file,
cert_file,
type=None,
ca_cert=None,
ca_key=None,
email_address=None,
country_name=None,
locality_name=None,
state_or_province_name=None,
organization_name=None,
organization_unit_name=None,
):
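# generate (or reuse) a key pair and sign an X.509 certificate: self-signed for a CA, signed by the given CA otherwise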
# the generated certificate can be inspected with openssl:
#openssl x509 -inform pem -in selfsigned.crt -noout -text
# create a key pair
if isfile(key_file):
with open(key_file) as fh:
filecontent = bytes(fh.read(), 'utf-8')
key = load_privatekey(FILETYPE_PEM, filecontent)
else:
key = _gen_key_pair()
cert = X509()
cert.set_version(2)
cert.get_subject().C = country_name
cert.get_subject().ST = state_or_province_name
cert.get_subject().L = locality_name
cert.get_subject().O = organization_name
cert.get_subject().OU = organization_unit_name
cert.get_subject().CN = common_names[0]
cert.get_subject().emailAddress = email_address
cert_ext = []
if not is_ca:
cert_ext.append(X509Extension(b'basicConstraints', False, b'CA:FALSE'))
cert_ext.append(X509Extension(b'keyUsage', True, b'digitalSignature, keyEncipherment'))
cert_ext.append(X509Extension(b'subjectAltName', False, ", ".join([f'DNS:{common_name}' for common_name in common_names]).encode('ascii')))
if type == 'server':
cert_ext.append(X509Extension(b'extendedKeyUsage', True, b'serverAuth'))
else:
cert_ext.append(X509Extension(b'extendedKeyUsage', True, b'clientAuth'))
else:
cert_ext.append(X509Extension(b'basicConstraints', False, b'CA:TRUE'))
cert_ext.append(X509Extension(b"keyUsage", True, b'keyCertSign, cRLSign'))
cert_ext.append(X509Extension(b'subjectAltName', False, f'email:{email_address}'.encode()))
cert_ext.append(X509Extension(b'subjectKeyIdentifier', False, b"hash", subject=cert))
cert.add_extensions(cert_ext)
cert.set_serial_number(serial_number)
cert.gmtime_adj_notBefore(0)
cert.gmtime_adj_notAfter(validity_end_in_seconds)
if is_ca:
ca_cert = cert
ca_key = key
else:
with open(ca_cert) as fh:
filecontent = bytes(fh.read(), 'utf-8')
ca_cert = load_certificate(FILETYPE_PEM, filecontent)
with open(ca_key) as fh:
filecontent = bytes(fh.read(), 'utf-8')
ca_key = load_privatekey(FILETYPE_PEM, filecontent)
cert.set_issuer(ca_cert.get_subject())
cert.add_extensions([X509Extension(b"authorityKeyIdentifier", False, b'keyid:always', issuer=ca_cert)])
cert.set_pubkey(key)
cert.sign(ca_key, "sha512")
with open(cert_file, "wt") as f:
f.write(dump_certificate(FILETYPE_PEM, cert).decode("utf-8"))
with open(key_file, "wt") as f:
f.write(dump_privatekey(FILETYPE_PEM, key).decode("utf-8"))
def gen_ca(authority_dns,
authority_name,
base_dir,
):
authority_cn = authority_name + '+' + authority_dns
week_number = datetime.now().isocalendar().week
root_dir_name = join(base_dir, PKI_DIR, authority_cn)
ca_dir_name = join(root_dir_name, 'ca')
sn_ca_name = join(ca_dir_name, 'serial_number')
key_ca_name = join(ca_dir_name, 'private.key')
cert_ca_name = join(ca_dir_name, f'certificate_{week_number}.crt')
if not isfile(cert_ca_name):
if not isdir(ca_dir_name):
# rmtree(ca_dir_name)
makedirs(ca_dir_name)
if isfile(sn_ca_name):
with open(sn_ca_name, 'r') as fh:
serial_number = int(fh.read().strip()) + 1
else:
serial_number = 0
_gen_cert(True,
[authority_cn],
serial_number,
10*24*60*60,
key_ca_name,
cert_ca_name,
email_address=EMAIL,
country_name=COUNTRY,
locality_name=LOCALITY,
state_or_province_name=STATE,
organization_name=ORG_NAME,
organization_unit_name=ORG_UNIT_NAME,
)
with open(sn_ca_name, 'w') as fh:
fh.write(str(serial_number))
with open(cert_ca_name, 'r') as fh:
return fh.read().strip()
def gen_cert_iter(cn,
extra_domainnames,
authority_cn,
authority_name,
type,
base_dir,
dir_name,
):
week_number = datetime.now().isocalendar().week
root_dir_name = join(base_dir, PKI_DIR, authority_cn)
ca_dir_name = join(root_dir_name, 'ca')
key_ca_name = join(ca_dir_name, 'private.key')
cert_ca_name = join(ca_dir_name, f'certificate_{week_number}.crt')
sn_name = join(dir_name, f'serial_number')
key_name = join(dir_name, f'private.key')
cert_name = join(dir_name, f'certificate_{week_number}.crt')
if not isfile(cert_ca_name):
raise Exception(f'cannot find CA file "{cert_ca_name}"')
if not isfile(cert_name):
if not isdir(dir_name):
makedirs(dir_name)
if isfile(sn_name):
with open(sn_name, 'r') as fh:
serial_number = int(fh.read().strip()) + 1
else:
serial_number = 0
common_names = [cn]
common_names.extend(extra_domainnames)
_gen_cert(False,
common_names,
serial_number,
10*24*60*60,
key_name,
cert_name,
ca_cert=cert_ca_name,
ca_key=key_ca_name,
type=type,
email_address=EMAIL,
country_name=COUNTRY,
locality_name=LOCALITY,
state_or_province_name=STATE,
organization_name=ORG_NAME,
organization_unit_name=ORG_UNIT_NAME,
)
with open(sn_name, 'w') as fh:
fh.write(str(serial_number))
for extra in extra_domainnames:
extra_dir_name = join(base_dir, PKI_DIR, authority_name + '+' + extra)
if not exists(extra_dir_name):
symlink(root_dir_name, extra_dir_name)
for extra in extra_domainnames:
extra_dir_name = join(base_dir, PKI_DIR, authority_name + '+' + extra)
if not exists(extra_dir_name):
raise Exception(f'file {extra_dir_name} does not exist, which means subjectAltName is not set in the certificate, please remove {cert_name}')
return cert_name
def gen_cert(cn,
extra_domainnames,
authority_cn,
authority_name,
type,
file_type,
base_dir,
):
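# return the certificate ('crt') or the private key content for cn, generating it under the named authority when needed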
if '.' in authority_name:
raise Exception(f'dot is not allowed in authority_name "{authority_name}"')
if type == 'server' and authority_cn is None:
authority_cn = cn
if authority_cn is None:
raise Exception('authority_cn is mandatory when the authority type is "client"')
if extra_domainnames is None:
extra_domainnames = []
auth_cn = authority_name + '+' + authority_cn
dir_name = join(base_dir, PKI_DIR, auth_cn, 'certificats', cn, type)
if file_type == 'crt':
filename = gen_cert_iter(cn,
extra_domainnames,
auth_cn,
authority_name,
type,
base_dir,
dir_name,
)
else:
filename = join(dir_name, f'private.key')
with open(filename, 'r') as fh:
return fh.read().strip()
def has_pub(cn,
base_dir,
):
dir_name = join(base_dir, PKI_DIR, 'public', cn)
cert_name = join(dir_name, f'public.pub')
return isfile(cert_name)
def gen_pub(cn,
file_type,
base_dir,
):
dir_name = join(base_dir, PKI_DIR, 'public', cn)
key_name = join(dir_name, f'private.key')
if file_type == 'pub':
pub_name = join(dir_name, f'public.pub')
if not isfile(pub_name):
if not isdir(dir_name):
makedirs(dir_name)
key = _gen_key_pair()
with open(pub_name, "wt") as f:
f.write(dump_publickey(FILETYPE_PEM, key).decode("utf-8"))
with open(key_name, "wt") as f:
f.write(dump_privatekey(FILETYPE_PEM, key).decode("utf-8"))
filename = pub_name
else:
filename = key_name
with open(filename, 'r') as fh:
return fh.read().strip()

468
test.py
View file

@@ -1,468 +0,0 @@
#!/usr/bin/env python3
from asyncio import run
from os import listdir, link, makedirs, environ
from os.path import isdir, isfile, join
from shutil import rmtree, copy2, copytree
from json import load as json_load
from yaml import load, SafeLoader
from toml import load as toml_load
from pprint import pprint
from typing import Any
from warnings import warn_explicit
from copy import copy
from tiramisu import Config
from tiramisu.error import ValueWarning
from rougail import RougailConfig, RougailConvert, RougailSystemdTemplate
from rougail.utils import normalize_family
from risotto.utils import MULTI_FUNCTIONS, CONFIGS
with open(environ.get('CONFIG_FILE', 'risotto.conf'), 'r') as fh:
config = toml_load(fh)
DATASET_DIRECTORY = config['directories']['dataset']
INSTALL_DIR = config['directories']['dest']
FUNCTIONS = 'funcs.py'
CONFIG_DEST_DIR = 'configurations'
SRV_DEST_DIR = 'srv'
with open('servers.json', 'r') as server_fh:
jsonfile = json_load(server_fh)
SERVERS = jsonfile['servers']
MODULES = jsonfile['modules']
async def set_linked(linked_server: str,
linked_provider: str,
linked_value: str,
linked_returns: str=None,
dynamic: str=None,
):
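# append (or set) linked_value on the provider variable of the linked server's config and optionally return the value named by linked_returns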
if None in (linked_server, linked_provider, linked_value):
return
if linked_server not in CONFIGS:
warn_explicit(ValueWarning(f'cannot find linked server "{linked_server}"'),
ValueWarning,
__file__,
0,
)
return
config = CONFIGS[linked_server][0]
path = await config.information.get('provider:' + linked_provider, None)
if not path:
warn_explicit(ValueWarning(f'cannot find provider "{linked_provider}" in linked server "{linked_server}"'),
ValueWarning,
__file__,
0,
)
return
await config.property.read_write()
try:
option = config.forcepermissive.option(path)
if await option.option.ismulti():
values = await option.value.get()
if linked_value not in values:
values.append(linked_value)
await option.value.set(values)
else:
await option.value.set(linked_value)
except Exception as err:
await config.property.read_only()
raise err from err
await config.property.read_only()
if linked_returns is not None:
linked_variable = await config.information.get('provider:' + linked_returns, None)
if not linked_variable:
warn_explicit(ValueWarning(f'cannot find linked variable "{linked_returns}" in linked server "{linked_server}"'),
ValueWarning,
__file__,
0,
)
return
else:
linked_variable = None
if linked_variable is not None:
if dynamic:
linked_variable = linked_variable.replace('{suffix}', normalize_family(dynamic))
elif '{suffix}' in linked_variable:
idx = CONFIGS[linked_server][3]
linked_variable = linked_variable.replace('{suffix}', str(idx))
ret = await config.forcepermissive.option(linked_variable).value.get()
else:
ret = normalize_family(linked_value)
return ret
async def get_linked_configuration(linked_server: str,
linked_provider: str,
dynamic: str=None,
):
if linked_server not in CONFIGS:
warn_explicit(ValueWarning(f'cannot find linked server "{linked_server}"'),
ValueWarning,
__file__,
1,
)
return
config = CONFIGS[linked_server][0]
path = await config.information.get('provider:' + linked_provider, None)
if not path:
warn_explicit(ValueWarning(f'cannot find variable "{path}" in linked server "{linked_server}"'),
ValueWarning,
__file__,
1,
)
return
if dynamic:
path = path.replace('{suffix}', normalize_family(dynamic))
try:
return await config.forcepermissive.option(path).value.get()
except AttributeError as err:
warn_explicit(ValueWarning(f'cannot get the value of "{path}" in linked server "{linked_server}": {err}'),
ValueWarning,
__file__,
1,
)
class Empty:
pass
empty = Empty()
async def set_linked_configuration(_linked_value: Any,
linked_server: str,
linked_provider: str,
linked_value: Any=empty,
dynamic: str=None,
leader_provider: str=None,
leader_value: Any=None,
leader_index: int=None,
):
if linked_value is not empty:
_linked_value = linked_value
linked_value = _linked_value
if linked_server is None:
return
if linked_value is None or linked_server not in CONFIGS:
warn_explicit(ValueWarning(f'cannot find linked server "{linked_server}"'),
ValueWarning,
__file__,
2,
)
return
config = CONFIGS[linked_server][0]
path = await config.information.get('provider:' + linked_provider, None)
if not path:
warn_explicit(ValueWarning(f'cannot find variable "{path}" in linked server "{linked_server}"'),
ValueWarning,
__file__,
2,
)
return
if dynamic:
path = path.replace('{suffix}', normalize_family(dynamic))
await config.property.read_write()
try:
if leader_provider is not None:
leader_path = await config.information.get('provider:' + leader_provider, None)
if not leader_path:
await config.property.read_only()
warn_explicit(ValueWarning(f'cannot find leader variable with leader_provider "{leader_provider}" in linked server "{linked_server}"'),
ValueWarning,
__file__,
2,
)
return
if dynamic:
leader_path = leader_path.replace('{suffix}', normalize_family(dynamic))
values = await config.forcepermissive.option(leader_path).value.get()
if not isinstance(leader_value, list):
leader_value = [leader_value]
for lv in leader_value:
if lv in values:
slave_idx = values.index(lv)
slave_option = config.forcepermissive.option(path, slave_idx)
if await slave_option.option.issubmulti():
slave_values = await slave_option.value.get()
if linked_value not in slave_values:
slave_values.append(linked_value)
await slave_option.value.set(slave_values)
else:
await slave_option.value.set(linked_value)
else:
option = config.forcepermissive.option(path, leader_index)
if leader_index is None and await option.option.ismulti() and not isinstance(linked_value, list):
values = await option.value.get()
if linked_value not in values:
values.append(linked_value)
await option.value.set(values)
else:
await option.value.set(linked_value)
except AttributeError as err:
#raise ValueError(str(err)) from err
pass
except Exception as err:
await config.property.read_only()
raise err from err
await config.property.read_only()
def tiramisu_display_name(kls,
dyn_name: 'Base'=None,
suffix: str=None,
) -> str:
if dyn_name is not None:
name = kls.impl_getpath() + suffix
else:
name = kls.impl_getpath()
return name
def load_applications():
applications = {}
distrib_dir = join(DATASET_DIRECTORY, 'applicationservice')
for release in listdir(distrib_dir):
release_dir = join(distrib_dir, release)
if not isdir(release_dir):
continue
for applicationservice in listdir(release_dir):
applicationservice_dir = join(release_dir, applicationservice)
if not isdir(applicationservice_dir):
continue
if applicationservice in applications:
raise Exception(f'duplicate applicationservice: {applicationservice} ({applicationservice_dir} <=> {applications[applicationservice]})')
applications[applicationservice] = applicationservice_dir
return applications
class ModuleCfg():
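# per-module rougail configuration: dictionaries, templates, functions and extras collected from the application services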
def __init__(self):
self.dictionaries_dir = []
self.modules = []
self.functions_file = [FUNCTIONS]
self.templates_dir = []
self.extra_dictionaries = {}
self.servers = []
def build_module(module_name, datas, module_infos):
install_dir = join(INSTALL_DIR, module_name)
makedirs(install_dir)
applications = load_applications()
cfg = ModuleCfg()
module_infos[module_name] = cfg
def calc_depends(appname, added):
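# recursively collect dictionaries, templates, functions, extras and manual files of an application service and its dependencies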
if appname in added:
return
as_dir = applications[appname]
cfg.modules.append(appname)
dictionaries_dir = join(as_dir, 'dictionaries')
if isdir(dictionaries_dir):
cfg.dictionaries_dir.append(dictionaries_dir)
funcs_dir = join(as_dir, 'funcs')
if isdir(funcs_dir):
for f in listdir(funcs_dir):
if f.startswith('__'):
continue
cfg.functions_file.append(join(funcs_dir, f))
templates_dir = join(as_dir, 'templates')
if isdir(templates_dir):
cfg.templates_dir.append(templates_dir)
extras_dir = join(as_dir, 'extras')
if isdir(extras_dir):
for extra in listdir(extras_dir):
extra_dir = join(extras_dir, extra)
if isdir(extra_dir):
cfg.extra_dictionaries.setdefault(extra, []).append(extra_dir)
for type in ['image', 'install']:
manual_dir = join(as_dir, 'manual', type)
if isdir(manual_dir):
for filename in listdir(manual_dir):
src_file = join(manual_dir, filename)
if type == 'image':
dst_file = join(install_dir, filename)
verify = False
else:
dst_file = join(INSTALL_DIR, filename)
verify = True
if isdir(src_file):
if not isdir(dst_file):
makedirs(dst_file)
for subfilename in listdir(src_file):
if not verify or not isfile(dst_file):
src = join(src_file, subfilename)
dst = join(dst_file, subfilename)
if isfile(src):
copy2(src, dst)
else:
copytree(src, dst)
elif not verify or not isfile(dst_file):
src = join(manual_dir, filename)
dst = dst_file
if isfile(src):
copy2(src, dst)
else:
copytree(src, dst)
added.append(appname)
with open(join(as_dir, 'applicationservice.yml')) as yaml:
app = load(yaml, Loader=SafeLoader)
for xml in app.get('depends', []):
calc_depends(xml, added)
added = []
for applicationservice in datas['applicationservices']:
calc_depends(applicationservice, added)
async def build(server_name, datas, module_infos):
if server_name in CONFIGS:
raise Exception(f'server "{server_name}" is duplicate')
cfg = RougailConfig.copy()
module_info = module_infos[datas['module']]
module_info.servers.append(server_name)
if datas['module'] == 'host':
cfg['tmpfile_dest_dir'] = datas['values']['rougail.host_install_dir'] + '/host/configurations/' + server_name
cfg['templates_dir'] = module_info.templates_dir
cfg['dictionaries_dir'] = module_info.dictionaries_dir
cfg['functions_file'] = module_info.functions_file
cfg['multi_functions'] = MULTI_FUNCTIONS
cfg['extra_dictionaries'] = module_info.extra_dictionaries
cfg['extra_annotators'].append('risotto.rougail')
optiondescription = {'set_linked': set_linked,
'get_linked_configuration': get_linked_configuration,
'set_linked_configuration': set_linked_configuration,
}
cfg['internal_functions'] = list(optiondescription.keys())
try:
eolobj = RougailConvert(cfg)
except Exception as err:
print(f'Try to load {module_info.modules}')
raise err from err
tiram_obj = eolobj.save(None)
# if server_name == 'revprox.in.silique.fr':
# print(tiram_obj)
#cfg['patches_dir'] = join(test_dir, 'patches')
cfg['tmp_dir'] = 'tmp'
cfg['destinations_dir'] = join(INSTALL_DIR, datas['module'], CONFIG_DEST_DIR, server_name)
if isdir('tmp'):
rmtree('tmp')
makedirs('tmp')
makedirs(cfg['destinations_dir'])
try:
exec(tiram_obj, None, optiondescription)
except Exception as err:
print(tiram_obj)
raise Exception(f'unknown error when loading the tiramisu object: {err}') from err
config = await Config(optiondescription['option_0'], display_name=tiramisu_display_name)
await config.property.read_write()
try:
if await config.option('machine.add_srv').value.get():
srv = join(INSTALL_DIR, SRV_DEST_DIR, server_name)
else:
srv = None
except AttributeError:
srv = None
await config.property.read_write()
CONFIGS[server_name] = (config, cfg, srv, 0)
async def value_pprint(dico, config):
pprint_dict = {}
for path, value in dico.items():
if await config.option(path).option.type() == 'password' and value:
value = 'X' * len(value)
pprint_dict[path] = value
pprint(pprint_dict)
async def set_values(server_name, config, datas):
if 'informations' in datas:
for information, value in datas['informations'].items():
await config.information.set(information, value)
if 'extra_domainnames' in datas['informations']:
for idx, extra_domainname in enumerate(datas['informations']['extra_domainnames']):
if extra_domainname in CONFIGS:
raise Exception(f'server "{server_name}" is duplicate')
value = list(CONFIGS[server_name])
value[3] = idx + 1
CONFIGS[extra_domainname] = tuple(value)
await config.information.set('server_name', server_name)
await config.property.read_write()
try:
if 'values' in datas:
for path, value in datas['values'].items():
if isinstance(value, dict):
for idx, val in value.items():
await config.option(path, int(idx)).value.set(val)
else:
await config.option(path).value.set(value)
except Exception as err:
await value_pprint(await config.value.dict(), config)
error_msg = f'cannot configure server "{server_name}": {err}'
raise Exception(error_msg) from err
await config.property.read_only()
#await config.value.dict()
async def valid_mandatories(server_name, config):
mandatories = await config.value.mandatory()
if mandatories:
print()
print(f'=== Configuration: {server_name} ===')
await config.property.pop('mandatory')
await value_pprint(await config.value.dict(), config)
raise Exception(f'server "{server_name}" has mandatories variables without values "{", ".join(mandatories)}"')
async def templates(server_name, config, cfg, srv, int_idx):
values = await config.value.dict()
engine = RougailSystemdTemplate(config, cfg)
# if server_name == 'revprox.in.silique.fr':
# print()
# print(f'=== Configuration: {server_name} ===')
# pprint(values)
try:
await engine.instance_files()
except Exception as err:
print()
print(f'=== Configuration: {server_name} ===')
await value_pprint(values, config)
raise err from err
if srv:
makedirs(srv)
async def main():
if isdir(INSTALL_DIR):
rmtree(INSTALL_DIR)
makedirs(INSTALL_DIR)
module_infos = {}
for module_name, datas in MODULES.items():
build_module(module_name, datas, module_infos)
for server_name, datas in SERVERS.items():
await build(server_name, datas, module_infos)
for module_name, cfg in module_infos.items():
with open(join(INSTALL_DIR, module_name, 'install_machines'), 'w') as fh:
for server_name in cfg.servers:
fh.write(f'./install_machine {module_name} {server_name}\n')
for server_name, datas in SERVERS.items():
await set_values(server_name, CONFIGS[server_name][0], datas)
for server_name in SERVERS:
config = CONFIGS[server_name][0]
await config.property.pop('mandatory')
await config.value.dict()
await config.property.add('mandatory')
for server_name in SERVERS:
await valid_mandatories(server_name, CONFIGS[server_name][0])
# print(await CONFIGS['revprox.in.gnunux.info'][0].option('nginx.reverse_proxy_for_netbox_in_gnunux_info.reverse_proxy_netbox_in_gnunux_info.revprox_url_netbox_in_gnunux_info', 0).value.get())
for server_name in SERVERS:
await templates(server_name, *CONFIGS[server_name])
run(main())