[PATCH v3 2/4] TaskAPI: add get_configuration to HostAPI
by Jan Tluka
Signed-off-by: Jan Tluka <jtluka(a)redhat.com>
---
lnst/Controller/Task.py | 3 +++
1 file changed, 3 insertions(+)
diff --git a/lnst/Controller/Task.py b/lnst/Controller/Task.py
index 9f21ffc..54c9d3f 100644
--- a/lnst/Controller/Task.py
+++ b/lnst/Controller/Task.py
@@ -193,6 +193,9 @@ class HostAPI(object):
def get_id(self):
return self._m.get_id()
+ def get_configuration(self):
+ return self._m.get_configuration()
+
def config(self, option, value, persistent=False, netns=None):
"""
Configure an option in /sys or /proc on the host.
--
2.4.3
[PATCH v3 1/4] TaskAPI: add set_comment method to PerfRepoResult
object
by Jan Tluka
This patch adds a set_comment() method to PerfRepoResult so that the user
can add a comment to a test execution.
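The guard this patch adds can be exercised in isolation. A minimal sketch, using stand-in stubs (TestExecution and PerfRepoResult below are assumptions mimicking the LNST classes, not the real Task API):

```python
# Stubs mimicking the relevant slice of the LNST Task API.
class TestExecution(object):
    def __init__(self):
        self.comment = None

    def set_comment(self, comment):
        self.comment = comment

class PerfRepoResult(object):
    def __init__(self):
        self._testExecution = TestExecution()

    def set_comment(self, comment):
        # forward only non-empty comments to the test execution,
        # as in the patch above
        if comment:
            self._testExecution.set_comment(comment)

result = PerfRepoResult()
result.set_comment(None)                # ignored, comment stays unset
result.set_comment("kernel: 4.4.0")     # recorded on the test execution
```

The `if comment:` check means callers can pass an unset alias straight through without clearing an existing comment.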
Signed-off-by: Jan Tluka <jtluka(a)redhat.com>
---
lnst/Controller/Task.py | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/lnst/Controller/Task.py b/lnst/Controller/Task.py
index 5462d44..9f21ffc 100644
--- a/lnst/Controller/Task.py
+++ b/lnst/Controller/Task.py
@@ -936,6 +936,10 @@ class PerfRepoResult(object):
def set_hash_ignore(self, hash_ignore):
self._hash_ignore = hash_ignore
+ def set_comment(self, comment):
+ if comment:
+ self._testExecution.set_comment(comment)
+
def get_hash_ignore(self):
return self._hash_ignore
--
2.4.3
[PATCH v2 3/4] RecipeCommon: added PerfRepo helper functions module
for python tasks
by Jan Tluka
This module provides functions related to PerfRepo. It currently contains
a single function, generate_perfrepo_comment(), which returns a string
with various information: the kernel versions of the hosts passed as an
argument, the Beaker job URL when running in a Beaker environment, and an
additional user-specified string to include in the comment. This function
will be used in the phase1/phase2 regression tests.
Signed-off-by: Jan Tluka <jtluka(a)redhat.com>
---
lnst/RecipeCommon/PerfRepo.py | 43 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 43 insertions(+)
create mode 100644 lnst/RecipeCommon/PerfRepo.py
diff --git a/lnst/RecipeCommon/PerfRepo.py b/lnst/RecipeCommon/PerfRepo.py
new file mode 100644
index 0000000..608ea58
--- /dev/null
+++ b/lnst/RecipeCommon/PerfRepo.py
@@ -0,0 +1,43 @@
+"""
+This module defines helper functions for interacting with PerfRepo
+that can be imported directly into LNST Python tasks.
+
+Copyright 2016 Red Hat, Inc.
+Licensed under the GNU General Public License, version 2 as
+published by the Free Software Foundation; see COPYING for details.
+"""
+
+__author__ = """
+jtluka(a)redhat.com (Jan Tluka)
+"""
+
+import os
+
+
+def generate_perfrepo_comment(hosts=[], user_comment=None):
+ """
+ Prepare the PerfRepo comment. By default it will include the kernel
+ versions used on the hosts and the Beaker job URL.
+
+ hosts: list of HostAPI objects
+ user_comment: additional user-specified comment
+ """
+
+ comment = ""
+
+ for host in hosts:
+ host_cfg = host.get_configuration()
+ comment += "Kernel (%s): %s<BR>" % \
+ (host_cfg['id'], host_cfg['kernel_release'])
+
+ # if we're running in Beaker environment, include job url
+ if 'BEAKER' in os.environ and 'JOBID' in os.environ:
+ bkr_server = os.environ['BEAKER']
+ bkr_jobid = os.environ['JOBID']
+ bkr_job_url = bkr_server + bkr_jobid
+ comment += "Beaker job: %s<BR>" % bkr_job_url
+
+ if user_comment:
+ comment += user_comment
+
+ return comment
--
2.4.3
[PATCH v2 4/4] recipes: add test execution comments in regression
tests
by Jan Tluka
All phase1/phase2 tests now include a comment in the PerfRepo test
executions. The comment contains the kernel versions of both the baremetal
and guest test machines. The user can also specify the alias
'perfrepo_comment', whose value is appended to the automatically generated
comment.
Signed-off-by: Jan Tluka <jtluka(a)redhat.com>
---
recipes/regression_tests/phase1/3_vlans.py | 8 ++++++++
recipes/regression_tests/phase1/3_vlans_over_bond.py | 8 ++++++++
recipes/regression_tests/phase1/bonding_test.py | 8 ++++++++
recipes/regression_tests/phase1/simple_netperf.py | 8 ++++++++
.../phase1/virtual_bridge_2_vlans_over_bond.py | 8 ++++++++
.../regression_tests/phase1/virtual_bridge_vlan_in_guest.py | 8 ++++++++
.../regression_tests/phase1/virtual_bridge_vlan_in_host.py | 8 ++++++++
recipes/regression_tests/phase2/3_vlans_over_team.py | 8 ++++++++
recipes/regression_tests/phase2/team_test.py | 12 ++++++++++++
.../virtual_ovs_bridge_2_vlans_over_active_backup_bond.py | 8 ++++++++
.../phase2/virtual_ovs_bridge_vlan_in_guest.py | 8 ++++++++
.../phase2/virtual_ovs_bridge_vlan_in_host.py | 8 ++++++++
12 files changed, 100 insertions(+)
diff --git a/recipes/regression_tests/phase1/3_vlans.py b/recipes/regression_tests/phase1/3_vlans.py
index 78eedcf..b9420b6 100644
--- a/recipes/regression_tests/phase1/3_vlans.py
+++ b/recipes/regression_tests/phase1/3_vlans.py
@@ -3,6 +3,7 @@ from lnst.Controller.PerfRepoUtils import netperf_baseline_template
from lnst.Controller.PerfRepoUtils import netperf_result_template
from lnst.RecipeCommon.IRQ import pin_dev_irqs
+from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment
# ------
# SETUP
@@ -41,6 +42,9 @@ nperf_cpupin = ctl.get_alias("nperf_cpupin")
nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
nperf_mode = ctl.get_alias("nperf_mode")
nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
+pr_user_comment = ctl.get_alias("perfrepo_comment")
+
+pr_comment = generate_perfrepo_comment([m1, m2], pr_user_comment)
m1_phy1 = m1.get_interface("eth1")
m1_phy1.set_mtu(mtu)
@@ -220,6 +224,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -244,6 +249,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
srv_proc.intr()
@@ -274,6 +280,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp ipv6
@@ -298,6 +305,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
srv_proc.intr()
diff --git a/recipes/regression_tests/phase1/3_vlans_over_bond.py b/recipes/regression_tests/phase1/3_vlans_over_bond.py
index ff83121..07d5f5c 100644
--- a/recipes/regression_tests/phase1/3_vlans_over_bond.py
+++ b/recipes/regression_tests/phase1/3_vlans_over_bond.py
@@ -3,6 +3,7 @@ from lnst.Controller.PerfRepoUtils import netperf_baseline_template
from lnst.Controller.PerfRepoUtils import netperf_result_template
from lnst.RecipeCommon.IRQ import pin_dev_irqs
+from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment
# ------
# SETUP
@@ -40,6 +41,9 @@ nperf_cpupin = ctl.get_alias("nperf_cpupin")
nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
nperf_mode = ctl.get_alias("nperf_mode")
nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
+pr_user_comment = ctl.get_alias("perfrepo_comment")
+
+pr_comment = generate_perfrepo_comment([m1, m2], pr_user_comment)
m1_bond = m1.get_interface("test_bond")
m1_bond.set_mtu(mtu)
@@ -219,6 +223,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -243,6 +248,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
srv_proc.intr()
@@ -273,6 +279,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp ipv6
@@ -297,6 +304,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
srv_proc.intr()
diff --git a/recipes/regression_tests/phase1/bonding_test.py b/recipes/regression_tests/phase1/bonding_test.py
index e125fb9..d56955d 100644
--- a/recipes/regression_tests/phase1/bonding_test.py
+++ b/recipes/regression_tests/phase1/bonding_test.py
@@ -3,6 +3,7 @@ from lnst.Controller.PerfRepoUtils import netperf_baseline_template
from lnst.Controller.PerfRepoUtils import netperf_result_template
from lnst.RecipeCommon.IRQ import pin_dev_irqs
+from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment
# ------
# SETUP
@@ -40,6 +41,9 @@ nperf_cpupin = ctl.get_alias("nperf_cpupin")
nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
nperf_mode = ctl.get_alias("nperf_mode")
nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
+pr_user_comment = ctl.get_alias("perfrepo_comment")
+
+pr_comment = generate_perfrepo_comment([m1, m2], pr_user_comment)
test_if1 = m1.get_interface("test_if")
test_if1.set_mtu(mtu)
@@ -197,6 +201,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -219,6 +224,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
@@ -249,6 +255,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp ipv6
@@ -271,6 +278,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
diff --git a/recipes/regression_tests/phase1/simple_netperf.py b/recipes/regression_tests/phase1/simple_netperf.py
index 407ee5d..baf3c3c 100644
--- a/recipes/regression_tests/phase1/simple_netperf.py
+++ b/recipes/regression_tests/phase1/simple_netperf.py
@@ -3,6 +3,7 @@ from lnst.Controller.PerfRepoUtils import netperf_baseline_template
from lnst.Controller.PerfRepoUtils import netperf_result_template
from lnst.RecipeCommon.IRQ import pin_dev_irqs
+from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment
# ------
# SETUP
@@ -39,6 +40,9 @@ nperf_cpupin = ctl.get_alias("nperf_cpupin")
nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
nperf_mode = ctl.get_alias("nperf_mode")
nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
+pr_user_comment = ctl.get_alias("perfrepo_comment")
+
+pr_comment = generate_perfrepo_comment([m1, m2], pr_user_comment)
m1_testiface = m1.get_interface("testiface")
m2_testiface = m2.get_interface("testiface")
@@ -165,6 +169,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -187,6 +192,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
srv_proc.intr()
@@ -213,6 +219,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp ipv6
@@ -235,6 +242,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
srv_proc.intr()
diff --git a/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py b/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py
index 598ee7b..6b79dff 100644
--- a/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py
+++ b/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py
@@ -3,6 +3,7 @@ from lnst.Controller.PerfRepoUtils import netperf_baseline_template
from lnst.Controller.PerfRepoUtils import netperf_result_template
from lnst.RecipeCommon.IRQ import pin_dev_irqs
+from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment
# ------
# SETUP
@@ -45,6 +46,9 @@ nperf_max_runs = int(ctl.get_alias("nperf_max_runs"))
nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
nperf_mode = ctl.get_alias("nperf_mode")
nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
+pr_user_comment = ctl.get_alias("perfrepo_comment")
+
+pr_comment = generate_perfrepo_comment([h1, g1, g2, h2, g3, g4], pr_user_comment)
mtu = ctl.get_alias("mtu")
enable_udp_perf = ctl.get_alias("enable_udp_perf")
@@ -275,6 +279,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
if enable_udp_perf is not None:
@@ -301,6 +306,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
@@ -337,6 +343,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp ipv6
@@ -363,6 +370,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
diff --git a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.py b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.py
index d3e4790..cf83070 100644
--- a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.py
+++ b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.py
@@ -3,6 +3,7 @@ from lnst.Controller.PerfRepoUtils import netperf_baseline_template
from lnst.Controller.PerfRepoUtils import netperf_result_template
from lnst.RecipeCommon.IRQ import pin_dev_irqs
+from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment
# ------
# SETUP
@@ -41,6 +42,9 @@ nperf_cpupin = ctl.get_alias("nperf_cpupin")
nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
nperf_mode = ctl.get_alias("nperf_mode")
nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
+pr_user_comment = ctl.get_alias("perfrepo_comment")
+
+pr_comment = generate_perfrepo_comment([h1, g1, h2], pr_user_comment)
mtu = ctl.get_alias("mtu")
enable_udp_perf = ctl.get_alias("enable_udp_perf")
@@ -211,6 +215,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -237,6 +242,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
@@ -270,6 +276,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp ipv6
@@ -296,6 +303,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
diff --git a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.py b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.py
index 6070beb..2ee4c60 100644
--- a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.py
+++ b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.py
@@ -3,6 +3,7 @@ from lnst.Controller.PerfRepoUtils import netperf_baseline_template
from lnst.Controller.PerfRepoUtils import netperf_result_template
from lnst.RecipeCommon.IRQ import pin_dev_irqs
+from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment
# ------
# SETUP
@@ -41,6 +42,9 @@ nperf_cpupin = ctl.get_alias("nperf_cpupin")
nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
nperf_mode = ctl.get_alias("nperf_mode")
nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
+pr_user_comment = ctl.get_alias("perfrepo_comment")
+
+pr_comment = generate_perfrepo_comment([h1, g1, h2], pr_user_comment)
mtu = ctl.get_alias("mtu")
enable_udp_perf = ctl.get_alias("enable_udp_perf")
@@ -211,6 +215,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -237,6 +242,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
@@ -270,6 +276,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp ipv6
@@ -296,6 +303,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
diff --git a/recipes/regression_tests/phase2/3_vlans_over_team.py b/recipes/regression_tests/phase2/3_vlans_over_team.py
index 972f94f..f66a38d 100644
--- a/recipes/regression_tests/phase2/3_vlans_over_team.py
+++ b/recipes/regression_tests/phase2/3_vlans_over_team.py
@@ -3,6 +3,7 @@ from lnst.Controller.PerfRepoUtils import netperf_baseline_template
from lnst.Controller.PerfRepoUtils import netperf_result_template
from lnst.RecipeCommon.IRQ import pin_dev_irqs
+from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment
# ------
# SETUP
@@ -40,6 +41,9 @@ nperf_cpupin = ctl.get_alias("nperf_cpupin")
nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
nperf_mode = ctl.get_alias("nperf_mode")
nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
+pr_user_comment = ctl.get_alias("perfrepo_comment")
+
+pr_comment = generate_perfrepo_comment([m1, m2], pr_user_comment)
m1_team = m1.get_interface("test_if")
m1_team.set_mtu(mtu)
@@ -219,6 +223,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -243,6 +248,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
srv_proc.intr()
@@ -273,6 +279,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp ipv6
@@ -297,6 +304,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
srv_proc.intr()
diff --git a/recipes/regression_tests/phase2/team_test.py b/recipes/regression_tests/phase2/team_test.py
index 7cc66dd..6327528 100644
--- a/recipes/regression_tests/phase2/team_test.py
+++ b/recipes/regression_tests/phase2/team_test.py
@@ -3,6 +3,7 @@ from lnst.Controller.PerfRepoUtils import netperf_baseline_template
from lnst.Controller.PerfRepoUtils import netperf_result_template
from lnst.RecipeCommon.IRQ import pin_dev_irqs
+from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment
# ------
# SETUP
@@ -39,6 +40,9 @@ nperf_cpupin = ctl.get_alias("nperf_cpupin")
nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
nperf_mode = ctl.get_alias("nperf_mode")
nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
+pr_user_comment = ctl.get_alias("perfrepo_comment")
+
+pr_comment = generate_perfrepo_comment([m1, m2], pr_user_comment)
test_if1 = m1.get_interface("test_if")
test_if1.set_mtu(mtu)
@@ -198,6 +202,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -222,6 +227,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
@@ -253,6 +259,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*5)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -277,6 +284,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*5)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
@@ -353,6 +361,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -377,6 +386,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
@@ -408,6 +418,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -432,6 +443,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
diff --git a/recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.py b/recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.py
index 5620d8a..a65c4ba 100644
--- a/recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.py
+++ b/recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.py
@@ -3,6 +3,7 @@ from lnst.Controller.PerfRepoUtils import netperf_baseline_template
from lnst.Controller.PerfRepoUtils import netperf_result_template
from lnst.RecipeCommon.IRQ import pin_dev_irqs
+from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment
# ------
# SETUP
@@ -45,6 +46,9 @@ nperf_max_runs = int(ctl.get_alias("nperf_max_runs"))
nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
nperf_mode = ctl.get_alias("nperf_mode")
nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
+pr_user_comment = ctl.get_alias("perfrepo_comment")
+
+pr_comment = generate_perfrepo_comment([h1, g1, g2, h2, g3, g4], pr_user_comment)
h1_nic1 = h1.get_interface("nic1")
h1_nic2 = h1.get_interface("nic2")
@@ -254,6 +258,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -280,6 +285,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
@@ -316,6 +322,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp ipv6
@@ -342,6 +349,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
diff --git a/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.py b/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.py
index 1e1020e..dfc54e3 100644
--- a/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.py
+++ b/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.py
@@ -3,6 +3,7 @@ from lnst.Controller.PerfRepoUtils import netperf_baseline_template
from lnst.Controller.PerfRepoUtils import netperf_result_template
from lnst.RecipeCommon.IRQ import pin_dev_irqs
+from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment
# ------
# SETUP
@@ -41,6 +42,9 @@ nperf_cpupin = ctl.get_alias("nperf_cpupin")
nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
nperf_mode = ctl.get_alias("nperf_mode")
nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
+pr_user_comment = ctl.get_alias("perfrepo_comment")
+
+pr_comment = generate_perfrepo_comment([h1, g1, h2], pr_user_comment)
h2_vlan10 = h2.get_interface("vlan10")
g1_vlan10 = g1.get_interface("vlan10")
@@ -199,6 +203,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -225,6 +230,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
@@ -258,6 +264,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp ipv6
@@ -284,6 +291,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
diff --git a/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.py b/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.py
index 92549e2..154f0b9 100644
--- a/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.py
+++ b/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.py
@@ -3,6 +3,7 @@ from lnst.Controller.PerfRepoUtils import netperf_baseline_template
from lnst.Controller.PerfRepoUtils import netperf_result_template
from lnst.RecipeCommon.IRQ import pin_dev_irqs
+from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment
# ------
# SETUP
@@ -41,6 +42,9 @@ nperf_cpupin = ctl.get_alias("nperf_cpupin")
nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
nperf_mode = ctl.get_alias("nperf_mode")
nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
+pr_user_comment = ctl.get_alias("perfrepo_comment")
+
+pr_comment = generate_perfrepo_comment([h1, g1, h2], pr_user_comment)
h2_vlan10 = h2.get_interface("vlan10")
g1_guestnic = g1.get_interface("guestnic")
@@ -198,6 +202,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -224,6 +229,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
@@ -258,6 +264,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp ipv6
@@ -284,6 +291,7 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
--
2.4.3
[PATCH v2 2/4] TaskAPI: add get_configuration to HostAPI
by Jan Tluka
Signed-off-by: Jan Tluka <jtluka(a)redhat.com>
---
lnst/Controller/Task.py | 3 +++
1 file changed, 3 insertions(+)
diff --git a/lnst/Controller/Task.py b/lnst/Controller/Task.py
index 9f21ffc..54c9d3f 100644
--- a/lnst/Controller/Task.py
+++ b/lnst/Controller/Task.py
@@ -193,6 +193,9 @@ class HostAPI(object):
def get_id(self):
return self._m.get_id()
+ def get_configuration(self):
+ return self._m.get_configuration()
+
def config(self, option, value, persistent=False, netns=None):
"""
Configure an option in /sys or /proc on the host.
--
2.4.3
[PATCH v2 1/4] TaskAPI: add set_comment method to PerfRepoResult
object
by Jan Tluka
This patch adds a set_comment() method to PerfRepoResult so that the user
can add a comment to a test execution.
Signed-off-by: Jan Tluka <jtluka(a)redhat.com>
---
lnst/Controller/Task.py | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/lnst/Controller/Task.py b/lnst/Controller/Task.py
index 5462d44..9f21ffc 100644
--- a/lnst/Controller/Task.py
+++ b/lnst/Controller/Task.py
@@ -936,6 +936,10 @@ class PerfRepoResult(object):
def set_hash_ignore(self, hash_ignore):
self._hash_ignore = hash_ignore
+ def set_comment(self, comment):
+ if comment:
+ self._testExecution.set_comment(comment)
+
def get_hash_ignore(self):
return self._hash_ignore
--
2.4.3
[PATCH v2 01/18] Config: add get_section_values method
by Ondrej Lichtner
From: Ondrej Lichtner <olichtne(a)redhat.com>
This method returns all the option values in a specific section. Previously
we only had the get_section method, which returns the internal dict
structure of the Config class and is not very usable in the application.
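The difference between the two accessors can be sketched against a simplified internal layout. The nested dict below is an assumed simplification of the real Config._options structure, with made-up option names:

```python
class ConfigError(Exception):
    pass

class Config(object):
    def __init__(self):
        # assumed internal layout: section -> option -> metadata dict
        self._options = {
            "environment": {
                "mac_pool_range": {"value": "52:54:01:00:00:01"},
                "rpcport": {"value": 9999},
            }
        }

    def get_section(self, section):
        # returns the raw internal dict, metadata wrappers included
        if section not in self._options:
            raise ConfigError("Unknown section: %s" % section)
        return self._options[section]

    def get_section_values(self, section):
        # flattens the section into a plain {option: value} mapping
        if section not in self._options:
            raise ConfigError("Unknown section: %s" % section)
        res = {}
        for opt_name, opt in self._options[section].items():
            res[opt_name] = opt["value"]
        return res

cfg = Config()
values = cfg.get_section_values("environment")   # {'mac_pool_range': ..., 'rpcport': 9999}
```

Callers that only care about the configured values no longer need to unwrap the per-option metadata dicts themselves.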
Signed-off-by: Ondrej Lichtner <olichtne(a)redhat.com>
---
lnst/Common/Config.py | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/lnst/Common/Config.py b/lnst/Common/Config.py
index 97a82e4..de7ba4a 100644
--- a/lnst/Common/Config.py
+++ b/lnst/Common/Config.py
@@ -155,6 +155,16 @@ class Config():
raise ConfigError(msg)
return self._options[section]
+ def get_section_values(self, section):
+ if section not in self._options:
+ msg = 'Unknow section: %s' % section
+ raise ConfigError(msg)
+
+ res = {}
+ for opt_name, opt in self._options[section].items():
+ res[opt_name] = opt["value"]
+ return res
+
def get_option(self, section, option):
sect = self.get_section(section)
if option not in sect:
--
2.7.2
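The difference between the two getters can be sketched with a minimal stand-in for the Config class; the section contents below are made up for illustration.

```python
class ConfigError(Exception):
    pass

class Config(object):
    """Minimal sketch of the relevant part of lnst.Common.Config."""
    def __init__(self, options):
        # {section: {option_name: {"value": ..., plus other metadata}}}
        self._options = options

    def get_section(self, section):
        # existing method: exposes the internal dict, metadata included
        if section not in self._options:
            raise ConfigError("Unknown section: %s" % section)
        return self._options[section]

    def get_section_values(self, section):
        # new method: flattens each option down to just its value
        if section not in self._options:
            raise ConfigError("Unknown section: %s" % section)
        return {name: opt["value"]
                for name, opt in self._options[section].items()}

cfg = Config({"perfrepo": {"url": {"value": "https://example.com",
                                   "additive": False}}})
print(cfg.get_section_values("perfrepo"))  # -> {'url': 'https://example.com'}
```

Callers that only need option values no longer have to know about the per-option metadata dict.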
[PATCH] recipes: add alias to specify test execution comment in
regression tests
by Jan Tluka
The user can now specify the alias 'perfrepo_comment' for all regression tests.
The alias value will be used as the comment for all test executions saved
in PerfRepo.
E.g: lnst-ctl -A perfrepo_comment="kernel: 2.6.32-573.el6" run \
active_backup_bond.xml
Signed-off-by: Jan Tluka <jtluka(a)redhat.com>
---
recipes/regression_tests/phase1/3_vlans.py | 9 +++++++++
recipes/regression_tests/phase1/3_vlans_over_bond.py | 9 +++++++++
recipes/regression_tests/phase1/bonding_test.py | 9 +++++++++
recipes/regression_tests/phase1/simple_netperf.py | 9 +++++++++
.../phase1/virtual_bridge_2_vlans_over_bond.py | 9 +++++++++
.../phase1/virtual_bridge_vlan_in_guest.py | 9 +++++++++
.../phase1/virtual_bridge_vlan_in_host.py | 9 +++++++++
recipes/regression_tests/phase2/3_vlans_over_team.py | 9 +++++++++
recipes/regression_tests/phase2/team_test.py | 17 +++++++++++++++++
...irtual_ovs_bridge_2_vlans_over_active_backup_bond.py | 9 +++++++++
.../phase2/virtual_ovs_bridge_vlan_in_guest.py | 9 +++++++++
.../phase2/virtual_ovs_bridge_vlan_in_host.py | 9 +++++++++
12 files changed, 116 insertions(+)
diff --git a/recipes/regression_tests/phase1/3_vlans.py b/recipes/regression_tests/phase1/3_vlans.py
index 78eedcf..fa2fe09 100644
--- a/recipes/regression_tests/phase1/3_vlans.py
+++ b/recipes/regression_tests/phase1/3_vlans.py
@@ -41,6 +41,7 @@ nperf_cpupin = ctl.get_alias("nperf_cpupin")
nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
nperf_mode = ctl.get_alias("nperf_mode")
nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
+pr_comment = ctl.get_alias("perfrepo_comment")
m1_phy1 = m1.get_interface("eth1")
m1_phy1.set_mtu(mtu)
@@ -220,6 +221,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -244,6 +247,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
srv_proc.intr()
@@ -274,6 +279,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp ipv6
@@ -298,6 +305,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
srv_proc.intr()
diff --git a/recipes/regression_tests/phase1/3_vlans_over_bond.py b/recipes/regression_tests/phase1/3_vlans_over_bond.py
index ff83121..024fdaa 100644
--- a/recipes/regression_tests/phase1/3_vlans_over_bond.py
+++ b/recipes/regression_tests/phase1/3_vlans_over_bond.py
@@ -40,6 +40,7 @@ nperf_cpupin = ctl.get_alias("nperf_cpupin")
nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
nperf_mode = ctl.get_alias("nperf_mode")
nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
+pr_comment = ctl.get_alias("perfrepo_comment")
m1_bond = m1.get_interface("test_bond")
m1_bond.set_mtu(mtu)
@@ -219,6 +220,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -243,6 +246,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
srv_proc.intr()
@@ -273,6 +278,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp ipv6
@@ -297,6 +304,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
srv_proc.intr()
diff --git a/recipes/regression_tests/phase1/bonding_test.py b/recipes/regression_tests/phase1/bonding_test.py
index e125fb9..5760178 100644
--- a/recipes/regression_tests/phase1/bonding_test.py
+++ b/recipes/regression_tests/phase1/bonding_test.py
@@ -40,6 +40,7 @@ nperf_cpupin = ctl.get_alias("nperf_cpupin")
nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
nperf_mode = ctl.get_alias("nperf_mode")
nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
+pr_comment = ctl.get_alias("perfrepo_comment")
test_if1 = m1.get_interface("test_if")
test_if1.set_mtu(mtu)
@@ -197,6 +198,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -219,6 +222,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
@@ -249,6 +254,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp ipv6
@@ -271,6 +278,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
diff --git a/recipes/regression_tests/phase1/simple_netperf.py b/recipes/regression_tests/phase1/simple_netperf.py
index 407ee5d..75a0f64 100644
--- a/recipes/regression_tests/phase1/simple_netperf.py
+++ b/recipes/regression_tests/phase1/simple_netperf.py
@@ -39,6 +39,7 @@ nperf_cpupin = ctl.get_alias("nperf_cpupin")
nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
nperf_mode = ctl.get_alias("nperf_mode")
nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
+pr_comment = ctl.get_alias("perfrepo_comment")
m1_testiface = m1.get_interface("testiface")
m2_testiface = m2.get_interface("testiface")
@@ -165,6 +166,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -187,6 +190,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
srv_proc.intr()
@@ -213,6 +218,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp ipv6
@@ -235,6 +242,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
srv_proc.intr()
diff --git a/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py b/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py
index 598ee7b..bb059bd 100644
--- a/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py
+++ b/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py
@@ -45,6 +45,7 @@ nperf_max_runs = int(ctl.get_alias("nperf_max_runs"))
nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
nperf_mode = ctl.get_alias("nperf_mode")
nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
+pr_comment = ctl.get_alias("perfrepo_comment")
mtu = ctl.get_alias("mtu")
enable_udp_perf = ctl.get_alias("enable_udp_perf")
@@ -275,6 +276,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
if enable_udp_perf is not None:
@@ -301,6 +304,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
@@ -337,6 +342,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp ipv6
@@ -363,6 +370,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
diff --git a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.py b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.py
index d3e4790..b78da1a 100644
--- a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.py
+++ b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.py
@@ -41,6 +41,7 @@ nperf_cpupin = ctl.get_alias("nperf_cpupin")
nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
nperf_mode = ctl.get_alias("nperf_mode")
nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
+pr_comment = ctl.get_alias("perfrepo_comment")
mtu = ctl.get_alias("mtu")
enable_udp_perf = ctl.get_alias("enable_udp_perf")
@@ -211,6 +212,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -237,6 +240,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
@@ -270,6 +275,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp ipv6
@@ -296,6 +303,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
diff --git a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.py b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.py
index 6070beb..6ba0080 100644
--- a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.py
+++ b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.py
@@ -41,6 +41,7 @@ nperf_cpupin = ctl.get_alias("nperf_cpupin")
nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
nperf_mode = ctl.get_alias("nperf_mode")
nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
+pr_comment = ctl.get_alias("perfrepo_comment")
mtu = ctl.get_alias("mtu")
enable_udp_perf = ctl.get_alias("enable_udp_perf")
@@ -211,6 +212,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -237,6 +240,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
@@ -270,6 +275,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp ipv6
@@ -296,6 +303,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
diff --git a/recipes/regression_tests/phase2/3_vlans_over_team.py b/recipes/regression_tests/phase2/3_vlans_over_team.py
index 972f94f..5a1c98a 100644
--- a/recipes/regression_tests/phase2/3_vlans_over_team.py
+++ b/recipes/regression_tests/phase2/3_vlans_over_team.py
@@ -40,6 +40,7 @@ nperf_cpupin = ctl.get_alias("nperf_cpupin")
nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
nperf_mode = ctl.get_alias("nperf_mode")
nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
+pr_comment = ctl.get_alias("perfrepo_comment")
m1_team = m1.get_interface("test_if")
m1_team.set_mtu(mtu)
@@ -219,6 +220,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -243,6 +246,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
srv_proc.intr()
@@ -273,6 +278,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp ipv6
@@ -297,6 +304,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
srv_proc.intr()
diff --git a/recipes/regression_tests/phase2/team_test.py b/recipes/regression_tests/phase2/team_test.py
index 7cc66dd..44bac86 100644
--- a/recipes/regression_tests/phase2/team_test.py
+++ b/recipes/regression_tests/phase2/team_test.py
@@ -39,6 +39,7 @@ nperf_cpupin = ctl.get_alias("nperf_cpupin")
nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
nperf_mode = ctl.get_alias("nperf_mode")
nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
+pr_comment = ctl.get_alias("perfrepo_comment")
test_if1 = m1.get_interface("test_if")
test_if1.set_mtu(mtu)
@@ -198,6 +199,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -222,6 +225,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
@@ -253,6 +258,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*5)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -277,6 +284,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*5)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
@@ -353,6 +362,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -377,6 +388,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
@@ -408,6 +421,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -432,6 +447,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
diff --git a/recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.py b/recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.py
index 5620d8a..771a4c0 100644
--- a/recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.py
+++ b/recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.py
@@ -45,6 +45,7 @@ nperf_max_runs = int(ctl.get_alias("nperf_max_runs"))
nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
nperf_mode = ctl.get_alias("nperf_mode")
nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
+pr_comment = ctl.get_alias("perfrepo_comment")
h1_nic1 = h1.get_interface("nic1")
h1_nic2 = h1.get_interface("nic2")
@@ -254,6 +255,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -280,6 +283,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
@@ -316,6 +321,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp ipv6
@@ -342,6 +349,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
diff --git a/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.py b/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.py
index 1e1020e..68851da 100644
--- a/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.py
+++ b/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.py
@@ -41,6 +41,7 @@ nperf_cpupin = ctl.get_alias("nperf_cpupin")
nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
nperf_mode = ctl.get_alias("nperf_mode")
nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
+pr_comment = ctl.get_alias("perfrepo_comment")
h2_vlan10 = h2.get_interface("vlan10")
g1_vlan10 = g1.get_interface("vlan10")
@@ -199,6 +200,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -225,6 +228,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
@@ -258,6 +263,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp ipv6
@@ -284,6 +291,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
diff --git a/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.py b/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.py
index 92549e2..07569ff 100644
--- a/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.py
+++ b/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.py
@@ -41,6 +41,7 @@ nperf_cpupin = ctl.get_alias("nperf_cpupin")
nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
nperf_mode = ctl.get_alias("nperf_mode")
nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
+pr_comment = ctl.get_alias("perfrepo_comment")
h2_vlan10 = h2.get_interface("vlan10")
g1_guestnic = g1.get_interface("guestnic")
@@ -198,6 +199,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp
@@ -224,6 +227,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
@@ -258,6 +263,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_tcp, tcp_res_data)
+ if pr_comment != None:
+ result_tcp.set_comment(pr_comment)
perf_api.save_result(result_tcp)
# prepare PerfRepo result for udp ipv6
@@ -284,6 +291,8 @@ for setting in offload_settings:
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
netperf_result_template(result_udp, udp_res_data)
+ if pr_comment != None:
+ result_udp.set_comment(pr_comment)
perf_api.save_result(result_udp)
server_proc.intr()
--
2.4.3
PLEASE READ - PyRecipes final discussion with drafts - Opinions requested
by Jiri Prochazka
Hello,
this mail is a status update on PyRecipes. I'm sorry there is no visible
progress yet; I have been considering several different formats and trying
to satisfy both parties, which will probably not be possible.
Current use cases of LNST (RedHat vs Mellanox)
==============================================
Currently we have two sides, each with their own use case scenarios. Red
Hat is using LNST for performance regression testing, together with PerfRepo.
Each test uses the same tools (ethtool, netperf, ping). The setups are very
similar in terms of hosts and eth ifaces; what changes is the soft interface
setup (bond, team, VLAN, ovs, bridge). Soft interfaces are currently set up
via XML, and the task only runs the test tools and does additional setup
(ethtool, MTU). The code is duplicated in every task (perfrepo methods,
ethtool setups, MTU setups, netperf inits and calls), so a new layer of
abstraction would be welcome to simplify the code and improve
maintainability.
As for the Mellanox side, I can only speak about what I can see in the
switchdev recipes. Their approach is to define only hardware interfaces in
XML and do all soft interface setup in the task. To create an abstraction
layer, a TestLib was written, which looks really appealing to me.
What do RH/Mellanox expect from PyRecipes
=========================================
We had a video call in January about PyRecipes with olichtne (RH) and
jpirko (Mellanox). Each side had a different view of PyRecipes.
Red Hat POV
-----------
1. Wants to be able to define soft and hard interfaces together in setup
2. Wants to be able to combine network setup with different tasks
3. Task should be understood as a function, with network as argument
4. Task should be generic
Mellanox POV
------------
1. Wants to get rid of IDs (hosts and interfaces)
2. Wants soft iface definition only in task, not in setup
3. Task should be specific
4. Wrappers for generic stuff
5. 1 task == 1 test == 1 file, do not combine it
Proposed approach #1 (Mlx like)
===============================
Description: Setup and task are in one file, no IDs are used, and the soft
interface definition is part of the task.
Example:
import lnst
m1 = lnst.add_host()
m2 = lnst.add_host()
m1_eth1 = m1.add_interface(label="tnet")
m1_eth2 = m1.add_interface(label="tnet")
m2_eth1 = m2.add_interface(label="tnet")
while lnst.match(match=lnst.SingleMatch):
m1_team = m1.create_team([m1_eth1, m1_eth2], ip="1.2.3.4/24")
m2_eth1.reset(["1.2.3.5/24"])
ping_mod = ...
m1_team.run(ping_mod)
Proposed approach #2 (RH like)
==============================
Description: Soft interfaces can be defined in both the setup() and task()
methods. IDs must be used due to the different scopes of the variables. The
setup can live in a separate file and be imported by multiple tasks.
Multiple tasks can be called from one file.
Example:
import lnst
def setup():
m1 = lnst.add_host("m1")
m2 = lnst.add_host("m2")
m1_eth1 = m1.add_interface(id="eth1", label="tnet")
m1_eth2 = m1.add_interface(id="eth2", label="tnet")
m2_eth1 = m2.add_interface(id="eth1", label="tnet", ip="1.1.1.1/24")
m1.create_team(id="team1", slaves=[m1_eth1, m1_eth2], ip="1.1.1.2/24")
def task():
m1 = lnst.get_host("m1")
m2 = lnst.get_host("m2")
m1_team = m1.get_interface("team1")
m2_eth1 = m2.get_interface("eth1")
ping_mod = ...
m1_team.run(ping_mod)
lnst.run(setup,
task,
match=lnst.SingleMatch)
Proposed approach #3 (RH like)
==============================
Description: The task method is portable; it takes machine and interface
objects as arguments, so no IDs are required. Soft interfaces can be created
both in do_task and in the setup phase. Multiple tasks can be called from
one file.
Example:
import lnst
def do_task(m1, if1, if2):
ping_mod = ...src=if1, dst=if2...
m1.run(ping_mod)
m1 = lnst.add_machine()
m2 = lnst.add_machine()
m1_eth1 = m1.add_interface(label="tnet")
m1_eth2 = m1.add_interface(label="tnet")
m1_team = m1.create_team(slaves=[m1_eth1, m1_eth2], ip="1.1.1.1/24")
m2_eth1 = m2.add_interface(label="tnet", ip="1.1.1.2/24")
while lnst.match(match=lnst.SingleMatch):
do_task(m1, m1_team, m2_eth1)
Summary
=======
The drafts above are not meant to be final; they can surely be improved and
modified to satisfy our needs. But we need to come to a conclusion
acceptable to both the Mlx and RH sides, so I can start working on it.
Some important questions regarding PyRecipes:
---------------------------------------------
I. Soft interfaces - in setup phase and task phase or only in task phase?
II. Portability - one task == one recipe, or allow combinations of networks
with different tasks?
III. Should task be generic or specific?
IV. Do we have to get rid of IDs?
My opinion on the matter
========================
Favourite approach - #3
Answers to questions:
---------------------
I. Soft interfaces - only in task phase - to follow the 1 task == 1 recipe
mentality; it will make maintenance easier
II. Portability - one task == one recipe - even our use case shows that
tests do not allow that much combination (1 task is used on average in 1-2
recipes in phase1 and 2-3 recipes in phase2), so I don't think it is
important enough to preserve
III. Specific task - generic stuff can be defined by a new layer of
abstraction, like TestLib in switchdev tests
IV. I do not think IDs are such an evil that we should get rid of them,
although I agree that an object-oriented approach (which we want to follow;
that is why PyRecipes became a thing in the first place) should use only
object instances in both task and setup.
Summary
=======
Please, devs from RH and Mlx, take a look at these drafts and send an email
with your opinion on the matter. Ideally I would like to have it decided by
Wednesday next week, so I can start implementing it.
In the end we will probably have to compromise, but hopefully both parties
will end up satisfied and all this work will lead to improving the quality
of the whole LNST.
Thanks for reading,
Jiri Prochazka
[PATCH] TaskAPI: add set_comment method to PerfRepoResult object
by Jan Tluka
This patch adds the set_comment() method to PerfRepoResultAPI so that the user
can add a comment to a test execution.
Signed-off-by: Jan Tluka <jtluka(a)redhat.com>
---
lnst/Controller/Task.py | 3 +++
1 file changed, 3 insertions(+)
diff --git a/lnst/Controller/Task.py b/lnst/Controller/Task.py
index 5462d44..dac8af0 100644
--- a/lnst/Controller/Task.py
+++ b/lnst/Controller/Task.py
@@ -936,6 +936,9 @@ class PerfRepoResult(object):
def set_hash_ignore(self, hash_ignore):
self._hash_ignore = hash_ignore
+ def set_comment(self, comment):
+ self._testExecution.set_comment(comment)
+
def get_hash_ignore(self):
return self._hash_ignore
--
2.4.3