[PATCH 0/3] Change token attr to read buffer
by Ramesh Thomas
This change matches the driver change that renames the bandwidth token
sysfs attributes to read buffers. It adds a new API for read buffers and
deprecates the existing bandwidth token API. Warnings are generated if a
configuration file contains token attributes. The config commands keep
the short options unchanged while renaming the long options to refer to
read buffers. The internal implementation now uses read buffers and
falls back to the bandwidth token attributes if a read buffer attribute
is not found because an older driver is in use.
Ramesh Thomas (3):
accel-config: Add support for new attributes related to read buffers
accel-config: Add deprecated attribute and warnings
accel-config: Update manpages to use "read buffer" instead of "token"
.../accfg/accel-config-config-device.txt | 10 +--
.../accfg/accel-config-config-group.txt | 26 +++---
Documentation/accfg/accel-config-list.txt | 28 +++---
accfg/config.c | 70 ++++++++++-----
accfg/config_attr.c | 66 +++++++-------
accfg/lib/libaccel-config.sym | 13 +++
accfg/lib/libaccfg.c | 90 ++++++++++++++-----
accfg/lib/private.h | 10 +--
accfg/libaccel_config.h | 48 +++++++---
accfg/list.c | 12 +--
test/libaccfg.c | 52 +++++------
util/json.c | 8 +-
12 files changed, 268 insertions(+), 165 deletions(-)
--
2.31.1
5 months, 1 week
[PATCH 0/4] Add "enable" and "forced" options to load-config
by Ramesh Thomas
Add new convenience options to the load-config command:
-e (enable): Enable configured devices and wqs
-f (forced): Disable devices being configured if they are already enabled
Also fix issues in the code that skips configuring device attributes
when a device must be skipped because it is already enabled.
Ramesh Thomas (4):
accel-config: Add "enable" option to load-config command
accel-config: Fix issues in skipping active configurations
accel-config: Add "forced" option to load-config command
accel-config: Update load-config documentation
.../accfg/accel-config-load-config.txt | 8 +
accfg/config.c | 155 +++++++++++++-----
2 files changed, 125 insertions(+), 38 deletions(-)
--
2.26.3
5 months, 2 weeks
[PATCH v1 1/1] accel-config/test: set tokens_allowed 0 in profiles for better bandwidth
by Tony Zhu
According to testing, with tokens_allowed = 8 the bandwidth is limited to
~8 GB/s per WQ. Set it to 0 to remove the limitation.
Signed-off-by: Tony Zhu <tony.zhu(a)intel.com>
---
contrib/configs/app_profile.conf | 4 ++--
contrib/configs/os_profile.conf | 2 +-
contrib/configs/profilenote.txt | 4 ++++
contrib/configs/storage_profile.conf | 4 ++--
4 files changed, 9 insertions(+), 5 deletions(-)
diff --git a/contrib/configs/app_profile.conf b/contrib/configs/app_profile.conf
index a6a3b12..3aa16ac 100644
--- a/contrib/configs/app_profile.conf
+++ b/contrib/configs/app_profile.conf
@@ -7,7 +7,7 @@
"dev":"group0.0",
"tokens_reserved":0,
"use_token_limit":0,
- "tokens_allowed":8,
+ "tokens_allowed":0,
"grouped_workqueues":[
{
"dev":"wq0.0",
@@ -34,7 +34,7 @@
"dev":"group0.1",
"tokens_reserved":0,
"use_token_limit":0,
- "tokens_allowed":8,
+ "tokens_allowed":0,
"grouped_workqueues":[
{
"dev":"wq0.1",
diff --git a/contrib/configs/os_profile.conf b/contrib/configs/os_profile.conf
index 1ee721f..077ce58 100644
--- a/contrib/configs/os_profile.conf
+++ b/contrib/configs/os_profile.conf
@@ -7,7 +7,7 @@
"dev":"group0.0",
"tokens_reserved":0,
"use_token_limit":0,
- "tokens_allowed":8,
+ "tokens_allowed":0,
"grouped_workqueues":[
{
"dev":"wq0.0",
diff --git a/contrib/configs/profilenote.txt b/contrib/configs/profilenote.txt
index 6debc9a..1e3d1c7 100644
--- a/contrib/configs/profilenote.txt
+++ b/contrib/configs/profilenote.txt
@@ -9,6 +9,7 @@ profile descriptions.
- max xfer size=2MB, max batch sz=32, bof=0
- priority 10
- threshold 15
+ - tokens_allowed 0
2. app_profile.conf
Recommended dsa default application profile.
@@ -17,6 +18,7 @@ profile descriptions.
max xfer size=16KB, max batch sz=32, bof=0, threshold=6
- Grp2 - SWQ2@32qd, 1eng
max xfer size=2MB, max batch sz=32, bof=0, threshold=28
+ - tokens_allowed 0
- Use group1 for smaller size ops with better latency characteristics
3. net_profile.conf
@@ -24,6 +26,7 @@ profile descriptions.
- 4 groups, each with 1 DWQ@32qd/1 eng per group
- Each DWQ has max xfer size=16KB
- DWQ threshold default 0
+ - tokens_allowed 0
4. storage_profile.conf
Recommended dsa storage accel profile
@@ -34,3 +37,4 @@ profile descriptions.
for throughput ops
- targeting SPDK-like uses
- DWQ threshold default 0
+ - tokens_allowed 0
diff --git a/contrib/configs/storage_profile.conf b/contrib/configs/storage_profile.conf
index f301b4c..13f58c5 100644
--- a/contrib/configs/storage_profile.conf
+++ b/contrib/configs/storage_profile.conf
@@ -7,7 +7,7 @@
"dev":"group0.0",
"tokens_reserved":0,
"use_token_limit":0,
- "tokens_allowed":8,
+ "tokens_allowed":0,
"grouped_workqueues":[
{
"dev":"wq0.0",
@@ -34,7 +34,7 @@
"dev":"group0.1",
"tokens_reserved":0,
"use_token_limit":0,
- "tokens_allowed":8,
+ "tokens_allowed":0,
"grouped_workqueues":[
{
"dev":"wq0.1",
--
2.27.0
5 months, 2 weeks
[PATCH v3 5/5] accel-config/test: add -n to input the submit descriptor number
by Tony Zhu
Add a way to specify the number of descriptors to submit in tests.
Submitting a large number of descriptors can trigger issues that a
single submission does not.
Signed-off-by: Tony Zhu <tony.zhu(a)intel.com>
---
Changelog:
V2:
- Declare num_itr as int in alloc_batch_task. The code used int num_itr,
but the patch that was sent out had unsigned long; the last change made
after testing was not synced back.
V3:
- Allocate the size of the struct, not the size of a pointer to it. In
10 test runs the tests happened to pass with or without this fix, but it
is a real issue that a stress test may trigger.
test/dsa.c | 375 ++++++++++++++++++++++++++++++++-------------
test/dsa.h | 56 ++++---
test/dsa_test.c | 392 +++++++++++++++++++++++++++++-------------------
test/prep.c | 30 ++--
4 files changed, 556 insertions(+), 297 deletions(-)
diff --git a/test/dsa.c b/test/dsa.c
index 1191254..29aee41 100644
--- a/test/dsa.c
+++ b/test/dsa.c
@@ -215,6 +215,7 @@ int dsa_alloc(struct dsa_context *ctx, int shared, int dev_id, int wq_id)
dev = accfg_wq_get_device(ctx->wq);
ctx->dedicated = accfg_wq_get_mode(ctx->wq);
ctx->wq_size = accfg_wq_get_size(ctx->wq);
+ ctx->threshold = accfg_wq_get_threshold(ctx->wq);
ctx->wq_idx = accfg_wq_get_id(ctx->wq);
ctx->bof = accfg_wq_get_block_on_fault(ctx->wq);
ctx->wq_max_batch_size = accfg_wq_get_max_batch_size(ctx->wq);
@@ -232,15 +233,23 @@ int dsa_alloc(struct dsa_context *ctx, int shared, int dev_id, int wq_id)
return 0;
}
-int alloc_task(struct dsa_context *ctx)
+int alloc_multiple_tasks(struct dsa_context *ctx, int num_itr)
{
- ctx->single_task = __alloc_task();
- if (!ctx->single_task)
- return -ENOMEM;
+ struct task_node *tmp_tsk_node;
+ int cnt = 0;
- dbg("single task allocated, desc %#lx comp %#lx\n",
- ctx->single_task->desc, ctx->single_task->comp);
+ while (cnt < num_itr) {
+ tmp_tsk_node = ctx->multi_task_node;
+ ctx->multi_task_node = (struct task_node *)malloc(sizeof(struct task_node));
+ if (!ctx->multi_task_node)
+ return -ENOMEM;
+ ctx->multi_task_node->tsk = __alloc_task();
+ if (!ctx->multi_task_node->tsk)
+ return -ENOMEM;
+ ctx->multi_task_node->next = tmp_tsk_node;
+ cnt++;
+ }
return DSA_STATUS_OK;
}
@@ -333,48 +342,60 @@ int init_task(struct task *tsk, int tflags, int opcode,
return DSA_STATUS_OK;
}
-int alloc_batch_task(struct dsa_context *ctx, unsigned int task_num)
+int alloc_batch_task(struct dsa_context *ctx, unsigned int task_num, int num_itr)
{
+ struct btask_node *btsk_node;
struct batch_task *btsk;
+ int cnt = 0;
if (!ctx->is_batch) {
err("%s is valid only if 'is_batch' is enabled", __func__);
return -EINVAL;
}
- ctx->batch_task = malloc(sizeof(struct batch_task));
- if (!ctx->batch_task)
- return -ENOMEM;
- memset(ctx->batch_task, 0, sizeof(struct batch_task));
+ while (cnt < num_itr) {
+ btsk_node = ctx->multi_btask_node;
- btsk = ctx->batch_task;
+ ctx->multi_btask_node = (struct btask_node *)
+ malloc(sizeof(struct btask_node));
+ if (!ctx->multi_btask_node)
+ return -ENOMEM;
- btsk->core_task = __alloc_task();
- if (!btsk->core_task)
- return -ENOMEM;
+ ctx->multi_btask_node->btsk = malloc(sizeof(struct batch_task));
+ if (!ctx->multi_btask_node->btsk)
+ return -ENOMEM;
+ memset(ctx->multi_btask_node->btsk, 0, sizeof(struct batch_task));
+
+ btsk = ctx->multi_btask_node->btsk;
- btsk->sub_tasks = malloc(task_num * sizeof(struct task));
- if (!btsk->sub_tasks)
- return -ENOMEM;
- memset(btsk->sub_tasks, 0, task_num * sizeof(struct task));
+ btsk->core_task = __alloc_task();
+ if (!btsk->core_task)
+ return -ENOMEM;
- btsk->sub_descs = aligned_alloc(64,
- task_num * sizeof(struct dsa_hw_desc));
- if (!btsk->sub_descs)
- return -ENOMEM;
- memset(btsk->sub_descs, 0, task_num * sizeof(struct dsa_hw_desc));
+ btsk->sub_tasks = malloc(task_num * sizeof(struct task));
+ if (!btsk->sub_tasks)
+ return -ENOMEM;
+ memset(btsk->sub_tasks, 0, task_num * sizeof(struct task));
- btsk->sub_comps = aligned_alloc(32,
- task_num * sizeof(struct dsa_completion_record));
- if (!btsk->sub_comps)
- return -ENOMEM;
- memset(btsk->sub_comps, 0,
- task_num * sizeof(struct dsa_completion_record));
+ btsk->sub_descs = aligned_alloc(64, task_num * sizeof(struct dsa_hw_desc));
+ if (!btsk->sub_descs)
+ return -ENOMEM;
+ memset(btsk->sub_descs, 0, task_num * sizeof(struct dsa_hw_desc));
- dbg("batch task allocated %#lx, ctask %#lx, sub_tasks %#lx\n",
- btsk, btsk->core_task, btsk->sub_tasks);
- dbg("sub_descs %#lx, sub_comps %#lx\n",
- btsk->sub_descs, btsk->sub_comps);
+ btsk->sub_comps =
+ aligned_alloc(32, task_num * sizeof(struct dsa_completion_record));
+ if (!btsk->sub_comps)
+ return -ENOMEM;
+ memset(btsk->sub_comps, 0,
+ task_num * sizeof(struct dsa_completion_record));
+
+ dbg("batch task allocated %#lx, ctask %#lx, sub_tasks %#lx\n",
+ btsk, btsk->core_task, btsk->sub_tasks);
+ dbg("sub_descs %#lx, sub_comps %#lx\n",
+ btsk->sub_descs, btsk->sub_comps);
+ ctx->multi_btask_node->next = btsk_node;
+ cnt++;
+ }
return DSA_STATUS_OK;
}
@@ -541,10 +562,31 @@ void dsa_free(struct dsa_context *ctx)
void dsa_free_task(struct dsa_context *ctx)
{
- if (!ctx->is_batch)
- free_task(ctx->single_task);
- else
- free_batch_task(ctx->batch_task);
+ if (!ctx->is_batch) {
+ struct task_node *tsk_node = NULL, *tmp_node = NULL;
+
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ tmp_node = tsk_node->next;
+ free_task(tsk_node->tsk);
+ tsk_node->tsk = NULL;
+ free(tsk_node);
+ tsk_node = tmp_node;
+ }
+ ctx->multi_task_node = NULL;
+ } else {
+ struct btask_node *tsk_node = NULL, *tmp_node = NULL;
+
+ tsk_node = ctx->multi_btask_node;
+ while (tsk_node) {
+ tmp_node = tsk_node->next;
+ free_batch_task(tsk_node->btsk);
+ tsk_node->btsk = NULL;
+ free(tsk_node);
+ tsk_node = tmp_node;
+ }
+ ctx->multi_btask_node = NULL;
+ }
}
void free_task(struct task *tsk)
@@ -594,9 +636,9 @@ void free_batch_task(struct batch_task *btsk)
free(btsk);
}
-int dsa_wait_noop(struct dsa_context *ctx)
+int dsa_wait_noop(struct dsa_context *ctx, struct task *tsk)
{
- struct dsa_completion_record *comp = ctx->single_task->comp;
+ struct dsa_completion_record *comp = tsk->comp;
int rc;
rc = dsa_wait_on_desc_timeout(comp, ms_timeout);
@@ -608,25 +650,40 @@ int dsa_wait_noop(struct dsa_context *ctx)
return DSA_STATUS_OK;
}
-int dsa_noop(struct dsa_context *ctx)
+int dsa_noop_multi_task_nodes(struct dsa_context *ctx)
{
- struct task *tsk = ctx->single_task;
+ struct task_node *tsk_node = ctx->multi_task_node;
int ret = DSA_STATUS_OK;
- tsk->dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
+ while (tsk_node) {
+ tsk_node->tsk->dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
- dsa_prep_noop(tsk);
- dsa_desc_submit(ctx, tsk->desc);
- ret = dsa_wait_noop(ctx);
+ dsa_prep_noop(tsk_node->tsk);
+ tsk_node = tsk_node->next;
+ }
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ dsa_desc_submit(ctx, tsk_node->tsk->desc);
+ tsk_node = tsk_node->next;
+ }
+ tsk_node = ctx->multi_task_node;
+ info("Submitted all noop jobs\n");
+
+ while (tsk_node) {
+ ret = dsa_wait_noop(ctx, tsk_node->tsk);
+ if (ret != DSA_STATUS_OK)
+ info("Desc: %p failed with ret: %d\n",
+ tsk_node->tsk->desc, tsk_node->tsk->comp->status);
+ tsk_node = tsk_node->next;
+ }
return ret;
}
-int dsa_wait_batch(struct dsa_context *ctx)
+int dsa_wait_batch(struct batch_task *btsk)
{
int rc;
- struct batch_task *btsk = ctx->batch_task;
struct task *ctsk = btsk->core_task;
info("wait batch\n");
@@ -641,10 +698,10 @@ int dsa_wait_batch(struct dsa_context *ctx)
return DSA_STATUS_OK;
}
-int dsa_wait_memcpy(struct dsa_context *ctx)
+int dsa_wait_memcpy(struct dsa_context *ctx, struct task *tsk)
{
- struct dsa_hw_desc *desc = ctx->single_task->desc;
- struct dsa_completion_record *comp = ctx->single_task->comp;
+ struct dsa_hw_desc *desc = tsk->desc;
+ struct dsa_completion_record *comp = tsk->comp;
int rc;
again:
@@ -657,33 +714,50 @@ again:
/* re-submit if PAGE_FAULT reported by HW && BOF is off */
if (stat_val(comp->status) == DSA_COMP_PAGE_FAULT_NOBOF &&
!(desc->flags & IDXD_OP_FLAG_BOF)) {
- dsa_reprep_memcpy(ctx);
+ dsa_reprep_memcpy(ctx, tsk);
goto again;
}
return DSA_STATUS_OK;
}
-int dsa_memcpy(struct dsa_context *ctx)
+int dsa_memcpy_multi_task_nodes(struct dsa_context *ctx)
{
- struct task *tsk = ctx->single_task;
+ struct task_node *tsk_node = ctx->multi_task_node;
int ret = DSA_STATUS_OK;
- tsk->dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
- if ((tsk->test_flags & TEST_FLAGS_BOF) && ctx->bof)
- tsk->dflags |= IDXD_OP_FLAG_BOF;
+ while (tsk_node) {
+ tsk_node->tsk->dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
+ if ((tsk_node->tsk->test_flags & TEST_FLAGS_BOF) && ctx->bof)
+ tsk_node->tsk->dflags |= IDXD_OP_FLAG_BOF;
- dsa_prep_memcpy(tsk);
- dsa_desc_submit(ctx, tsk->desc);
- ret = dsa_wait_memcpy(ctx);
+ dsa_prep_memcpy(tsk_node->tsk);
+ tsk_node = tsk_node->next;
+ }
+
+ info("Submitted all memcpy jobs\n");
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ dsa_desc_submit(ctx, tsk_node->tsk->desc);
+ tsk_node = tsk_node->next;
+ }
+
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ ret = dsa_wait_memcpy(ctx, tsk_node->tsk);
+ if (ret != DSA_STATUS_OK)
+ info("Desc: %p failed with ret: %d\n",
+ tsk_node->tsk->desc, tsk_node->tsk->comp->status);
+ tsk_node = tsk_node->next;
+ }
return ret;
}
-int dsa_wait_memfill(struct dsa_context *ctx)
+int dsa_wait_memfill(struct dsa_context *ctx, struct task *tsk)
{
- struct dsa_hw_desc *desc = ctx->single_task->desc;
- struct dsa_completion_record *comp = ctx->single_task->comp;
+ struct dsa_hw_desc *desc = tsk->desc;
+ struct dsa_completion_record *comp = tsk->comp;
int rc;
again:
@@ -697,33 +771,50 @@ again:
/* re-submit if PAGE_FAULT reported by HW && BOF is off */
if (stat_val(comp->status) == DSA_COMP_PAGE_FAULT_NOBOF &&
!(desc->flags & IDXD_OP_FLAG_BOF)) {
- dsa_reprep_memfill(ctx);
+ dsa_reprep_memfill(ctx, tsk);
goto again;
}
return DSA_STATUS_OK;
}
-int dsa_memfill(struct dsa_context *ctx)
+int dsa_memfill_multi_task_nodes(struct dsa_context *ctx)
{
- struct task *tsk = ctx->single_task;
+ struct task_node *tsk_node = ctx->multi_task_node;
int ret = DSA_STATUS_OK;
- tsk->dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
- if ((tsk->test_flags & TEST_FLAGS_BOF) && ctx->bof)
- tsk->dflags |= IDXD_OP_FLAG_BOF;
+ while (tsk_node) {
+ tsk_node->tsk->dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
+ if ((tsk_node->tsk->test_flags & TEST_FLAGS_BOF) && ctx->bof)
+ tsk_node->tsk->dflags |= IDXD_OP_FLAG_BOF;
+
+ dsa_prep_memfill(tsk_node->tsk);
+ tsk_node = tsk_node->next;
+ }
+
+ info("Submitted all memfill jobs\n");
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ dsa_desc_submit(ctx, tsk_node->tsk->desc);
+ tsk_node = tsk_node->next;
+ }
- dsa_prep_memfill(tsk);
- dsa_desc_submit(ctx, tsk->desc);
- ret = dsa_wait_memfill(ctx);
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ ret = dsa_wait_memfill(ctx, tsk_node->tsk);
+ if (ret != DSA_STATUS_OK)
+ info("Desc: %p failed with ret: %d\n",
+ tsk_node->tsk->desc, tsk_node->tsk->comp->status);
+ tsk_node = tsk_node->next;
+ }
return ret;
}
-int dsa_wait_compare(struct dsa_context *ctx)
+int dsa_wait_compare(struct dsa_context *ctx, struct task *tsk)
{
- struct dsa_hw_desc *desc = ctx->single_task->desc;
- struct dsa_completion_record *comp = ctx->single_task->comp;
+ struct dsa_hw_desc *desc = tsk->desc;
+ struct dsa_completion_record *comp = tsk->comp;
int rc;
again:
@@ -737,33 +828,50 @@ again:
/* re-submit if PAGE_FAULT reported by HW && BOF is off */
if (stat_val(comp->status) == DSA_COMP_PAGE_FAULT_NOBOF &&
!(desc->flags & IDXD_OP_FLAG_BOF)) {
- dsa_reprep_compare(ctx);
+ dsa_reprep_compare(ctx, tsk);
goto again;
}
return DSA_STATUS_OK;
}
-int dsa_compare(struct dsa_context *ctx)
+int dsa_compare_multi_task_nodes(struct dsa_context *ctx)
{
- struct task *tsk = ctx->single_task;
+ struct task_node *tsk_node = ctx->multi_task_node;
int ret = DSA_STATUS_OK;
- tsk->dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
- if ((tsk->test_flags & TEST_FLAGS_BOF) && ctx->bof)
- tsk->dflags |= IDXD_OP_FLAG_BOF;
+ while (tsk_node) {
+ tsk_node->tsk->dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
+ if ((tsk_node->tsk->test_flags & TEST_FLAGS_BOF) && ctx->bof)
+ tsk_node->tsk->dflags |= IDXD_OP_FLAG_BOF;
+
+ dsa_prep_compare(tsk_node->tsk);
+ tsk_node = tsk_node->next;
+ }
+
+ info("Submitted all compare jobs\n");
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ dsa_desc_submit(ctx, tsk_node->tsk->desc);
+ tsk_node = tsk_node->next;
+ }
- dsa_prep_compare(tsk);
- dsa_desc_submit(ctx, tsk->desc);
- ret = dsa_wait_compare(ctx);
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ ret = dsa_wait_compare(ctx, tsk_node->tsk);
+ if (ret != DSA_STATUS_OK)
+ info("Desc: %p failed with ret: %d\n",
+ tsk_node->tsk->desc, tsk_node->tsk->comp->status);
+ tsk_node = tsk_node->next;
+ }
return ret;
}
-int dsa_wait_compval(struct dsa_context *ctx)
+int dsa_wait_compval(struct dsa_context *ctx, struct task *tsk)
{
- struct dsa_hw_desc *desc = ctx->single_task->desc;
- struct dsa_completion_record *comp = ctx->single_task->comp;
+ struct dsa_hw_desc *desc = tsk->desc;
+ struct dsa_completion_record *comp = tsk->comp;
int rc;
again:
@@ -777,33 +885,50 @@ again:
/* re-submit if PAGE_FAULT reported by HW && BOF is off */
if (stat_val(comp->status) == DSA_COMP_PAGE_FAULT_NOBOF &&
!(desc->flags & IDXD_OP_FLAG_BOF)) {
- dsa_reprep_compval(ctx);
+ dsa_reprep_compval(ctx, tsk);
goto again;
}
return DSA_STATUS_OK;
}
-int dsa_compval(struct dsa_context *ctx)
+int dsa_compval_multi_task_nodes(struct dsa_context *ctx)
{
- struct task *tsk = ctx->single_task;
+ struct task_node *tsk_node = ctx->multi_task_node;
int ret = DSA_STATUS_OK;
- tsk->dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
- if ((tsk->test_flags & TEST_FLAGS_BOF) && ctx->bof)
- tsk->dflags |= IDXD_OP_FLAG_BOF;
+ while (tsk_node) {
+ tsk_node->tsk->dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
+ if ((tsk_node->tsk->test_flags & TEST_FLAGS_BOF) && ctx->bof)
+ tsk_node->tsk->dflags |= IDXD_OP_FLAG_BOF;
+
+ dsa_prep_compval(tsk_node->tsk);
+ tsk_node = tsk_node->next;
+ }
- dsa_prep_compval(tsk);
- dsa_desc_submit(ctx, tsk->desc);
- ret = dsa_wait_compval(ctx);
+ info("Submitted all compval jobs\n");
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ dsa_desc_submit(ctx, tsk_node->tsk->desc);
+ tsk_node = tsk_node->next;
+ }
+
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ ret = dsa_wait_compval(ctx, tsk_node->tsk);
+ if (ret != DSA_STATUS_OK)
+ info("Desc: %p failed with ret: %d\n",
+ tsk_node->tsk->desc, tsk_node->tsk->comp->status);
+ tsk_node = tsk_node->next;
+ }
return ret;
}
-int dsa_wait_dualcast(struct dsa_context *ctx)
+int dsa_wait_dualcast(struct dsa_context *ctx, struct task *tsk)
{
- struct dsa_hw_desc *desc = ctx->single_task->desc;
- struct dsa_completion_record *comp = ctx->single_task->comp;
+ struct dsa_hw_desc *desc = tsk->desc;
+ struct dsa_completion_record *comp = tsk->comp;
int rc;
again:
@@ -816,25 +941,42 @@ again:
/* re-submit if PAGE_FAULT reported by HW && BOF is off */
if (stat_val(comp->status) == DSA_COMP_PAGE_FAULT_NOBOF &&
!(desc->flags & IDXD_OP_FLAG_BOF)) {
- dsa_reprep_dualcast(ctx);
+ dsa_reprep_dualcast(ctx, tsk);
goto again;
}
return DSA_STATUS_OK;
}
-int dsa_dualcast(struct dsa_context *ctx)
+int dsa_dualcast_multi_task_nodes(struct dsa_context *ctx)
{
- struct task *tsk = ctx->single_task;
+ struct task_node *tsk_node = ctx->multi_task_node;
int ret = DSA_STATUS_OK;
- tsk->dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
- if ((tsk->test_flags & TEST_FLAGS_BOF) && ctx->bof)
- tsk->dflags |= IDXD_OP_FLAG_BOF;
+ while (tsk_node) {
+ tsk_node->tsk->dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
+ if ((tsk_node->tsk->test_flags & TEST_FLAGS_BOF) && ctx->bof)
+ tsk_node->tsk->dflags |= IDXD_OP_FLAG_BOF;
- dsa_prep_dualcast(tsk);
- dsa_desc_submit(ctx, tsk->desc);
- ret = dsa_wait_dualcast(ctx);
+ dsa_prep_dualcast(tsk_node->tsk);
+ tsk_node = tsk_node->next;
+ }
+
+ info("Submitted all dualcast jobs\n");
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ dsa_desc_submit(ctx, tsk_node->tsk->desc);
+ tsk_node = tsk_node->next;
+ }
+
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ ret = dsa_wait_dualcast(ctx, tsk_node->tsk);
+ if (ret != DSA_STATUS_OK)
+ info("Desc: %p failed with ret: %d\n",
+ tsk_node->tsk->desc, tsk_node->tsk->comp->status);
+ tsk_node = tsk_node->next;
+ }
return ret;
}
@@ -872,6 +1014,23 @@ int task_result_verify(struct task *tsk, int mismatch_expected)
return DSA_STATUS_OK;
}
+int task_result_verify_task_nodes(struct dsa_context *ctx, int mismatch_expected)
+{
+ struct task_node *tsk_node = ctx->multi_task_node;
+ int ret = DSA_STATUS_OK;
+
+ while (tsk_node) {
+ ret = task_result_verify(tsk_node->tsk, mismatch_expected);
+ if (ret != DSA_STATUS_OK) {
+ err("memory result verify failed %d\n", ret);
+ return ret;
+ }
+ tsk_node = tsk_node->next;
+ }
+
+ return ret;
+}
+
int task_result_verify_memcpy(struct task *tsk, int mismatch_expected)
{
int rc;
diff --git a/test/dsa.h b/test/dsa.h
index df2fc09..382fc66 100644
--- a/test/dsa.h
+++ b/test/dsa.h
@@ -62,6 +62,11 @@ struct task {
int test_flags;
};
+struct task_node {
+ struct task *tsk;
+ struct task_node *next;
+};
+
/* metadata for batch DSA task */
struct batch_task {
struct task *core_task; /* core task with batch opcode 0x1*/
@@ -72,6 +77,11 @@ struct batch_task {
int test_flags;
};
+struct btask_node {
+ struct batch_task *btsk;
+ struct btask_node *next;
+};
+
struct dsa_context {
struct accfg_ctx *ctx;
struct accfg_wq *wq;
@@ -84,6 +94,7 @@ struct dsa_context {
int wq_idx;
void *wq_reg;
int wq_size;
+ int threshold;
int dedicated;
int bof;
unsigned int wq_max_batch_size;
@@ -92,8 +103,8 @@ struct dsa_context {
int is_batch;
union {
- struct task *single_task;
- struct batch_task *batch_task;
+ struct task_node *multi_task_node;
+ struct btask_node *multi_btask_node;
};
};
@@ -214,42 +225,43 @@ int dsa_enqcmd(struct dsa_context *ctx, struct dsa_hw_desc *hw);
struct dsa_context *dsa_init(void);
int dsa_alloc(struct dsa_context *ctx, int shared, int dev_id, int wq_id);
-int alloc_task(struct dsa_context *ctx);
+int alloc_multiple_tasks(struct dsa_context *ctx, int num_itr);
struct task *__alloc_task(void);
int init_task(struct task *tsk, int tflags, int opcode,
unsigned long xfer_size);
-int dsa_noop(struct dsa_context *ctx);
-int dsa_wait_noop(struct dsa_context *ctx);
+int dsa_noop_multi_task_nodes(struct dsa_context *ctx);
+int dsa_wait_noop(struct dsa_context *ctx, struct task *tsk);
-int dsa_memcpy(struct dsa_context *ctx);
-int dsa_wait_memcpy(struct dsa_context *ctx);
+int dsa_memcpy_multi_task_nodes(struct dsa_context *ctx);
+int dsa_wait_memcpy(struct dsa_context *ctx, struct task *tsk);
-int dsa_memfill(struct dsa_context *ctx);
-int dsa_wait_memfill(struct dsa_context *ctx);
+int dsa_memfill_multi_task_nodes(struct dsa_context *ctx);
+int dsa_wait_memfill(struct dsa_context *ctx, struct task *tsk);
-int dsa_compare(struct dsa_context *ctx);
-int dsa_wait_compare(struct dsa_context *ctx);
+int dsa_compare_multi_task_nodes(struct dsa_context *ctx);
+int dsa_wait_compare(struct dsa_context *ctx, struct task *tsk);
-int dsa_compval(struct dsa_context *ctx);
-int dsa_wait_compval(struct dsa_context *ctx);
+int dsa_compval_multi_task_nodes(struct dsa_context *ctx);
+int dsa_wait_compval(struct dsa_context *ctx, struct task *tsk);
-int dsa_dualcast(struct dsa_context *ctx);
-int dsa_wait_dualcast(struct dsa_context *ctx);
+int dsa_dualcast_multi_task_nodes(struct dsa_context *ctx);
+int dsa_wait_dualcast(struct dsa_context *ctx, struct task *tsk);
void dsa_prep_noop(struct task *tsk);
void dsa_prep_memcpy(struct task *tsk);
-void dsa_reprep_memcpy(struct dsa_context *ctx);
+void dsa_reprep_memcpy(struct dsa_context *ctx, struct task *tsk);
void dsa_prep_memfill(struct task *tsk);
-void dsa_reprep_memfill(struct dsa_context *ctx);
+void dsa_reprep_memfill(struct dsa_context *ctx, struct task *tsk);
void dsa_prep_compare(struct task *tsk);
-void dsa_reprep_compare(struct dsa_context *ctx);
+void dsa_reprep_compare(struct dsa_context *ctx, struct task *tsk);
void dsa_prep_compval(struct task *tsk);
-void dsa_reprep_compval(struct dsa_context *ctx);
+void dsa_reprep_compval(struct dsa_context *ctx, struct task *tsk);
void dsa_prep_dualcast(struct task *tsk);
-void dsa_reprep_dualcast(struct dsa_context *ctx);
+void dsa_reprep_dualcast(struct dsa_context *ctx, struct task *tsk);
int task_result_verify(struct task *tsk, int mismatch_expected);
+int task_result_verify_task_nodes(struct dsa_context *ctx, int mismatch_expected);
int task_result_verify_memcpy(struct task *tsk, int mismatch_expected);
int task_result_verify_memfill(struct task *tsk, int mismatch_expected);
int task_result_verify_compare(struct task *tsk, int mismatch_expected);
@@ -257,7 +269,7 @@ int task_result_verify_compval(struct task *tsk, int mismatch_expected);
int task_result_verify_dualcast(struct task *tsk, int mismatch_expected);
int batch_result_verify(struct batch_task *btsk, int bof);
-int alloc_batch_task(struct dsa_context *ctx, unsigned int task_num);
+int alloc_batch_task(struct dsa_context *ctx, unsigned int task_num, int num_itr);
int init_batch_task(struct batch_task *btsk, int task_num, int tflags,
int opcode, unsigned long xfer_size, unsigned long dflags);
@@ -268,7 +280,7 @@ void dsa_prep_batch_memfill(struct batch_task *btsk);
void dsa_prep_batch_compare(struct batch_task *btsk);
void dsa_prep_batch_compval(struct batch_task *btsk);
void dsa_prep_batch_dualcast(struct batch_task *btsk);
-int dsa_wait_batch(struct dsa_context *ctx);
+int dsa_wait_batch(struct batch_task *btsk);
void dsa_free(struct dsa_context *ctx);
void dsa_free_task(struct dsa_context *ctx);
diff --git a/test/dsa_test.c b/test/dsa_test.c
index cd3bd50..eccf89c 100644
--- a/test/dsa_test.c
+++ b/test/dsa_test.c
@@ -24,19 +24,22 @@ static void usage(void)
"-b <opcode> ; if batch opcode, opcode in the batch\n"
"-c <batch_size> ; if batch opcode, number of descriptors for batch\n"
"-d ; wq device such as dsa0/wq0.0\n"
+ "-n <number of descriptors> ;descriptor count to submit\n"
"-t <ms timeout> ; ms to wait for descs to complete\n"
"-v ; verbose\n"
"-h ; print this message\n");
}
static int test_batch(struct dsa_context *ctx, size_t buf_size,
- int tflags, uint32_t bopcode, unsigned int bsize)
+ int tflags, uint32_t bopcode, unsigned int bsize, int num_desc)
{
+ struct btask_node *btsk_node;
unsigned long dflags;
- int rc = 0;
+ int rc = DSA_STATUS_OK;
+ int itr = num_desc, i = 0, range = 0;
- info("batch: len %#lx tflags %#x bopcode %#x batch_no %d\n",
- buf_size, tflags, bopcode, bsize);
+ info("batch: len %#lx tflags %#x bopcode %#x batch_no %d num_desc %d\n",
+ buf_size, tflags, bopcode, bsize, num_desc);
if (bopcode == DSA_OPCODE_BATCH) {
err("Can't have batch op inside batch op\n");
@@ -45,198 +48,279 @@ static int test_batch(struct dsa_context *ctx, size_t buf_size,
ctx->is_batch = 1;
- rc = alloc_batch_task(ctx, bsize);
- if (rc != DSA_STATUS_OK)
- return rc;
+ if (ctx->dedicated == ACCFG_WQ_SHARED)
+ range = ctx->threshold;
+ else
+ range = ctx->wq_size - 1;
- dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
- if ((tflags & TEST_FLAGS_BOF) && ctx->bof)
- dflags |= IDXD_OP_FLAG_BOF;
+ while (itr > 0 && rc == DSA_STATUS_OK) {
+ i = (itr < range) ? itr : range;
+ rc = alloc_batch_task(ctx, bsize, i);
+ if (rc != DSA_STATUS_OK)
+ return rc;
- rc = init_batch_task(ctx->batch_task, bsize, tflags, bopcode,
- buf_size, dflags);
- if (rc != DSA_STATUS_OK)
- return rc;
+ dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
+ if ((tflags & TEST_FLAGS_BOF) && ctx->bof)
+ dflags |= IDXD_OP_FLAG_BOF;
+
+ /* allocate memory to src and dest buffers and fill in the desc for all the nodes*/
+ btsk_node = ctx->multi_btask_node;
+ while (btsk_node) {
+ rc = init_batch_task(btsk_node->btsk, bsize, tflags, bopcode,
+ buf_size, dflags);
+ if (rc != DSA_STATUS_OK)
+ return rc;
+
+ switch (bopcode) {
+ case DSA_OPCODE_NOOP:
+ dsa_prep_batch_noop(btsk_node->btsk);
+ break;
+
+ case DSA_OPCODE_MEMMOVE:
+ dsa_prep_batch_memcpy(btsk_node->btsk);
+ break;
+
+ case DSA_OPCODE_MEMFILL:
+ dsa_prep_batch_memfill(btsk_node->btsk);
+ break;
+
+ case DSA_OPCODE_COMPARE:
+ dsa_prep_batch_compare(btsk_node->btsk);
+ break;
+ case DSA_OPCODE_COMPVAL:
+ dsa_prep_batch_compval(btsk_node->btsk);
+ break;
+
+ case DSA_OPCODE_DUALCAST:
+ dsa_prep_batch_dualcast(btsk_node->btsk);
+ break;
+ default:
+ err("Unsupported op %#x\n", bopcode);
+ return -EINVAL;
+ }
- switch (bopcode) {
- case DSA_OPCODE_NOOP:
- dsa_prep_batch_noop(ctx->batch_task);
- break;
- case DSA_OPCODE_MEMMOVE:
- dsa_prep_batch_memcpy(ctx->batch_task);
- break;
+ btsk_node = btsk_node->next;
+ }
- case DSA_OPCODE_MEMFILL:
- dsa_prep_batch_memfill(ctx->batch_task);
- break;
+ btsk_node = ctx->multi_btask_node;
+ while (btsk_node) {
+ dsa_prep_batch(btsk_node->btsk, dflags);
+ dump_sub_desc(btsk_node->btsk);
+ btsk_node = btsk_node->next;
+ }
- case DSA_OPCODE_COMPARE:
- dsa_prep_batch_compare(ctx->batch_task);
- break;
+ btsk_node = ctx->multi_btask_node;
+ while (btsk_node) {
+ dsa_desc_submit(ctx, btsk_node->btsk->core_task->desc);
+ btsk_node = btsk_node->next;
+ }
- case DSA_OPCODE_COMPVAL:
- dsa_prep_batch_compval(ctx->batch_task);
- break;
- case DSA_OPCODE_DUALCAST:
- dsa_prep_batch_dualcast(ctx->batch_task);
- break;
- default:
- err("Unsupported op %#x\n", bopcode);
- return -EINVAL;
- }
+ btsk_node = ctx->multi_btask_node;
+ while (btsk_node) {
+ rc = dsa_wait_batch(btsk_node->btsk);
+ if (rc != DSA_STATUS_OK)
+ err("batch failed stat %d\n", rc);
+ btsk_node = btsk_node->next;
+ }
- dsa_prep_batch(ctx->batch_task, dflags);
- dump_sub_desc(ctx->batch_task);
- dsa_desc_submit(ctx, ctx->batch_task->core_task->desc);
+ btsk_node = ctx->multi_btask_node;
+ while (btsk_node) {
+ rc = batch_result_verify(btsk_node->btsk, dflags & IDXD_OP_FLAG_BOF);
+ btsk_node = btsk_node->next;
+ }
- rc = dsa_wait_batch(ctx);
- if (rc != DSA_STATUS_OK) {
- err("batch failed stat %d\n", rc);
- rc = -ENXIO;
+ dsa_free_task(ctx);
+ itr = itr - range;
}
- rc = batch_result_verify(ctx->batch_task, dflags & IDXD_OP_FLAG_BOF);
-
return rc;
}
-static int test_noop(struct dsa_context *ctx, int tflags)
+static int test_noop(struct dsa_context *ctx, int tflags, int num_desc)
{
- struct task *tsk;
- int rc;
+ struct task_node *tsk_node;
+ int rc = DSA_STATUS_OK;
+ int itr = num_desc, i = 0, range = 0;
- info("noop: tflags %#x\n", tflags);
+ info("testnoop: tflags %#x num_desc %d\n", tflags, num_desc);
ctx->is_batch = 0;
- rc = alloc_task(ctx);
- if (rc != DSA_STATUS_OK) {
- err("noop: alloc task failed, rc=%d\n", rc);
- return rc;
- }
+ if (ctx->dedicated == ACCFG_WQ_SHARED)
+ range = ctx->threshold;
+ else
+ range = ctx->wq_size - 1;
- tsk = ctx->single_task;
+ while (itr > 0 && rc == DSA_STATUS_OK) {
+ i = (itr < range) ? itr : range;
+ /* Allocate memory to all the task nodes, desc, completion record*/
+ rc = alloc_multiple_tasks(ctx, i);
+ if (rc != DSA_STATUS_OK)
+ return rc;
- rc = dsa_noop(ctx);
- if (rc != DSA_STATUS_OK) {
- err("noop failed stat %d\n", rc);
- return rc;
- }
+ /* allocate memory to src and dest buffers and fill in the desc for all the nodes */
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ tsk_node->tsk->opcode = DSA_OPCODE_NOOP;
+ tsk_node->tsk->test_flags = tflags;
+ tsk_node = tsk_node->next;
+ }
+
+ rc = dsa_noop_multi_task_nodes(ctx);
+ if (rc != DSA_STATUS_OK)
+ return rc;
- rc = task_result_verify(tsk, 0);
- if (rc != DSA_STATUS_OK)
- return rc;
+ /* Verification of all the nodes */
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ rc = task_result_verify(tsk_node->tsk, 0);
+ tsk_node = tsk_node->next;
+ }
+
+ dsa_free_task(ctx);
+ itr = itr - range;
+ }
return rc;
}
static int test_memory(struct dsa_context *ctx, size_t buf_size,
- int tflags, uint32_t opcode)
+ int tflags, uint32_t opcode, int num_desc)
{
- struct task *tsk;
- int rc;
+ struct task_node *tsk_node;
+ int rc = DSA_STATUS_OK;
+ int itr = num_desc, i = 0, range = 0;
- info("mem: len %#lx tflags %#x opcode %d\n", buf_size, tflags, opcode);
+ info("testmemory: opcode %d len %#lx tflags %#x num_desc %d\n",
+ opcode, buf_size, tflags, num_desc);
ctx->is_batch = 0;
- rc = alloc_task(ctx);
- if (rc != DSA_STATUS_OK) {
- err("mem: alloc task failed opcode %d, rc=%d\n", opcode, rc);
- return rc;
- }
-
- tsk = ctx->single_task;
- rc = init_task(tsk, tflags, opcode, buf_size);
- if (rc != DSA_STATUS_OK) {
- err("mem: init task failed opcode %d, rc=%d\n", opcode, rc);
- return rc;
- }
-
- switch (opcode) {
- case DSA_OPCODE_MEMMOVE:
- rc = dsa_memcpy(ctx);
- if (rc != DSA_STATUS_OK)
- return rc;
-
- rc = task_result_verify(tsk, 0);
- if (rc != DSA_STATUS_OK)
- return rc;
- break;
-
- case DSA_OPCODE_MEMFILL:
- rc = dsa_memfill(ctx);
- if (rc != DSA_STATUS_OK)
- return rc;
-
- rc = task_result_verify(tsk, 0);
- if (rc != DSA_STATUS_OK)
- return rc;
- break;
+ if (ctx->dedicated == ACCFG_WQ_SHARED)
+ range = ctx->threshold;
+ else
+ range = ctx->wq_size - 1;
- case DSA_OPCODE_COMPARE:
- rc = dsa_compare(ctx);
+ while (itr > 0 && rc == DSA_STATUS_OK) {
+ i = (itr < range) ? itr : range;
+ /* Allocate memory to all the task nodes, desc, completion record */
+ rc = alloc_multiple_tasks(ctx, i);
if (rc != DSA_STATUS_OK)
return rc;
- rc = task_result_verify(tsk, 0);
- if (rc != DSA_STATUS_OK)
- return rc;
+ /* allocate memory to src and dest buffers and fill in the desc for all the nodes */
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ tsk_node->tsk->xfer_size = buf_size;
- info("Testing mismatch buffers\n");
- info("creating a diff at index %#lx\n", tsk->xfer_size / 2);
- ((uint8_t *)(tsk->src1))[tsk->xfer_size / 2] = 0;
- ((uint8_t *)(tsk->src2))[tsk->xfer_size / 2] = 1;
+ rc = init_task(tsk_node->tsk, tflags, opcode, buf_size);
+ if (rc != DSA_STATUS_OK)
+ return rc;
- memset(tsk->comp, 0, sizeof(struct dsa_completion_record));
+ tsk_node = tsk_node->next;
+ }
- rc = dsa_compare(ctx);
- if (rc != DSA_STATUS_OK)
- return rc;
+ switch (opcode) {
+ case DSA_OPCODE_MEMMOVE:
+ rc = dsa_memcpy_multi_task_nodes(ctx);
+ if (rc != DSA_STATUS_OK)
+ return rc;
- rc = task_result_verify(tsk, 1);
- if (rc != DSA_STATUS_OK)
- return rc;
- break;
+ /* Verification of all the nodes */
+ rc = task_result_verify_task_nodes(ctx, 0);
+ if (rc != DSA_STATUS_OK)
+ return rc;
+ break;
- case DSA_OPCODE_COMPVAL:
- rc = dsa_compval(ctx);
- if (rc != DSA_STATUS_OK)
- return rc;
+ case DSA_OPCODE_MEMFILL:
+ rc = dsa_memfill_multi_task_nodes(ctx);
+ if (rc != DSA_STATUS_OK)
+ return rc;
- rc = task_result_verify(tsk, 0);
- if (rc != DSA_STATUS_OK)
- return rc;
+ /* Verification of all the nodes */
+ rc = task_result_verify_task_nodes(ctx, 0);
+ if (rc != DSA_STATUS_OK)
+ return rc;
+ break;
- info("Testing mismatching buffers\n");
- info("creating a diff at index %#lx\n", tsk->xfer_size / 2);
- ((uint8_t *)(tsk->src1))[tsk->xfer_size / 2] =
- ~(((uint8_t *)(tsk->src1))[tsk->xfer_size / 2]);
+ case DSA_OPCODE_COMPARE:
+ rc = dsa_compare_multi_task_nodes(ctx);
+ if (rc != DSA_STATUS_OK)
+ return rc;
+
+ /* Verification of all the nodes */
+ rc = task_result_verify_task_nodes(ctx, 0);
+ if (rc != DSA_STATUS_OK)
+ return rc;
+
+ info("Testing mismatch buffers\n");
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ ((uint8_t *)(tsk_node->tsk->src1))[tsk_node->tsk->xfer_size / 2] =
+ 0;
+ ((uint8_t *)(tsk_node->tsk->src2))[tsk_node->tsk->xfer_size / 2] =
+ 1;
+ memset(tsk_node->tsk->comp, 0,
+ sizeof(struct dsa_completion_record));
+ tsk_node = tsk_node->next;
+ }
- memset(tsk->comp, 0, sizeof(struct dsa_completion_record));
+ rc = dsa_compare_multi_task_nodes(ctx);
+ if (rc != DSA_STATUS_OK)
+ return rc;
- rc = dsa_compval(ctx);
- if (rc != DSA_STATUS_OK)
- return rc;
+ /* Verification of all the nodes */
+ rc = task_result_verify_task_nodes(ctx, 1);
+ if (rc != DSA_STATUS_OK)
+ return rc;
+ break;
- rc = task_result_verify(tsk, 1);
- if (rc != DSA_STATUS_OK)
- return rc;
- break;
+ case DSA_OPCODE_COMPVAL:
+ rc = dsa_compval_multi_task_nodes(ctx);
+ if (rc != DSA_STATUS_OK)
+ return rc;
+
+ /* Verification of all the nodes */
+ rc = task_result_verify_task_nodes(ctx, 0);
+ if (rc != DSA_STATUS_OK)
+ return rc;
+
+ info("Testing mismatching buffers\n");
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ ((uint8_t *)(tsk_node->tsk->src1))[tsk_node->tsk->xfer_size / 2] =
+ ~(((uint8_t *)(tsk_node->tsk->src1))[tsk_node->tsk->xfer_size / 2]);
+ memset(tsk_node->tsk->comp, 0,
+ sizeof(struct dsa_completion_record));
+ tsk_node = tsk_node->next;
+ }
- case DSA_OPCODE_DUALCAST:
- rc = dsa_dualcast(ctx);
- if (rc != DSA_STATUS_OK)
- return rc;
+ rc = dsa_compval_multi_task_nodes(ctx);
+ if (rc != DSA_STATUS_OK)
+ return rc;
- rc = task_result_verify(tsk, 0);
- if (rc != DSA_STATUS_OK)
- return rc;
- break;
+ /* Verification of all the nodes */
+ rc = task_result_verify_task_nodes(ctx, 1);
+ if (rc != DSA_STATUS_OK)
+ return rc;
+ break;
+ case DSA_OPCODE_DUALCAST:
+ rc = dsa_dualcast_multi_task_nodes(ctx);
+ if (rc != DSA_STATUS_OK)
+ return rc;
+
+ /* Verification of all the nodes */
+ rc = task_result_verify_task_nodes(ctx, 0);
+ if (rc != DSA_STATUS_OK)
+ return rc;
+ break;
+ default:
+ err("Unsupported op %#x\n", opcode);
+ return -EINVAL;
+ }
- default:
- err("Unsupported opcode %#x\n", opcode);
- return -EINVAL;
+ dsa_free_task(ctx);
+ itr = itr - range;
}
return rc;
@@ -257,8 +341,9 @@ int main(int argc, char *argv[])
int wq_id = DSA_DEVICE_ID_NO_INPUT;
int dev_id = DSA_DEVICE_ID_NO_INPUT;
int dev_wq_id = DSA_DEVICE_ID_NO_INPUT;
+ unsigned int num_desc = 1;
- while ((opt = getopt(argc, argv, "w:l:f:o:b:c:d:t:p:vh")) != -1) {
+ while ((opt = getopt(argc, argv, "w:l:f:o:b:c:d:n:t:p:vh")) != -1) {
switch (opt) {
case 'w':
wq_type = atoi(optarg);
@@ -286,6 +371,9 @@ int main(int argc, char *argv[])
return -EINVAL;
}
break;
+ case 'n':
+ num_desc = strtoul(optarg, NULL, 0);
+ break;
case 't':
ms_timeout = strtoul(optarg, NULL, 0);
break;
@@ -316,7 +404,7 @@ int main(int argc, char *argv[])
switch (opcode) {
case DSA_OPCODE_NOOP:
- rc = test_noop(dsa, tflags);
+ rc = test_noop(dsa, tflags, num_desc);
if (rc != DSA_STATUS_OK)
goto error;
break;
@@ -327,7 +415,7 @@ int main(int argc, char *argv[])
rc = -EINVAL;
goto error;
}
- rc = test_batch(dsa, buf_size, tflags, bopcode, bsize);
+ rc = test_batch(dsa, buf_size, tflags, bopcode, bsize, num_desc);
if (rc < 0)
goto error;
break;
@@ -337,7 +425,7 @@ int main(int argc, char *argv[])
case DSA_OPCODE_COMPARE:
case DSA_OPCODE_COMPVAL:
case DSA_OPCODE_DUALCAST:
- rc = test_memory(dsa, buf_size, tflags, opcode);
+ rc = test_memory(dsa, buf_size, tflags, opcode, num_desc);
if (rc != DSA_STATUS_OK)
goto error;
break;
diff --git a/test/prep.c b/test/prep.c
index ab18e53..80e977c 100644
--- a/test/prep.c
+++ b/test/prep.c
@@ -51,10 +51,10 @@ void dsa_prep_memcpy(struct task *tsk)
tsk->comp->status = 0;
}
-void dsa_reprep_memcpy(struct dsa_context *ctx)
+void dsa_reprep_memcpy(struct dsa_context *ctx, struct task *tsk)
{
- struct dsa_completion_record *compl = ctx->single_task->comp;
- struct dsa_hw_desc *hw = ctx->single_task->desc;
+ struct dsa_completion_record *compl = tsk->comp;
+ struct dsa_hw_desc *hw = tsk->desc;
info("PF addr %#lx dir %d bc %#x\n",
compl->fault_addr, compl->result,
@@ -119,10 +119,10 @@ void dsa_prep_memfill(struct task *tsk)
tsk->comp->status = 0;
}
-void dsa_reprep_memfill(struct dsa_context *ctx)
+void dsa_reprep_memfill(struct dsa_context *ctx, struct task *tsk)
{
- struct dsa_completion_record *compl = ctx->single_task->comp;
- struct dsa_hw_desc *hw = ctx->single_task->desc;
+ struct dsa_completion_record *compl = tsk->comp;
+ struct dsa_hw_desc *hw = tsk->desc;
info("PF addr %#lx dir %d bc %#x\n",
compl->fault_addr, compl->result,
@@ -166,10 +166,10 @@ void dsa_prep_compare(struct task *tsk)
tsk->comp->status = 0;
}
-void dsa_reprep_compare(struct dsa_context *ctx)
+void dsa_reprep_compare(struct dsa_context *ctx, struct task *tsk)
{
- struct dsa_completion_record *compl = ctx->single_task->comp;
- struct dsa_hw_desc *hw = ctx->single_task->desc;
+ struct dsa_completion_record *compl = tsk->comp;
+ struct dsa_hw_desc *hw = tsk->desc;
info("PF addr %#lx dir %d bc %#x\n",
compl->fault_addr, compl->result,
@@ -214,10 +214,10 @@ void dsa_prep_compval(struct task *tsk)
tsk->comp->status = 0;
}
-void dsa_reprep_compval(struct dsa_context *ctx)
+void dsa_reprep_compval(struct dsa_context *ctx, struct task *tsk)
{
- struct dsa_completion_record *compl = ctx->single_task->comp;
- struct dsa_hw_desc *hw = ctx->single_task->desc;
+ struct dsa_completion_record *compl = tsk->comp;
+ struct dsa_hw_desc *hw = tsk->desc;
info("PF addr %#lx dir %d bc %#x\n",
compl->fault_addr, compl->result,
@@ -262,10 +262,10 @@ void dsa_prep_dualcast(struct task *tsk)
tsk->comp->status = 0;
}
-void dsa_reprep_dualcast(struct dsa_context *ctx)
+void dsa_reprep_dualcast(struct dsa_context *ctx, struct task *tsk)
{
- struct dsa_completion_record *compl = ctx->single_task->comp;
- struct dsa_hw_desc *hw = ctx->single_task->desc;
+ struct dsa_completion_record *compl = tsk->comp;
+ struct dsa_hw_desc *hw = tsk->desc;
info("PF addr %#lx dir %d bc %#x\n",
compl->fault_addr, compl->result,
--
2.27.0
[PATCH v3 5/5] accel-config/test: add -n to input the submit descriptor number
by Tony Zhu
Tests need a way to specify the number of descriptors to submit, since
submitting a large number of descriptors may trigger issues that do not
appear with a single submission.
Changelog:
V2:
- Declare num_itr as int in alloc_batch_task(). The code in my tree
already used int num_itr, but the patch that was sent out had unsigned
long; the last change, made after testing, was not synced back.
V3:
- Allocate the size of the struct, not the size of a pointer. Across 10
test runs the tests happened to pass with or without this fix, but it
is a real bug that a stress test may trigger.
Signed-off-by: Tony Zhu <tony.zhu(a)intel.com>
---
test/dsa.c | 375 ++++++++++++++++++++++++++++++++-------------
test/dsa.h | 56 ++++---
test/dsa_test.c | 392 +++++++++++++++++++++++++++++-------------------
test/prep.c | 30 ++--
4 files changed, 556 insertions(+), 297 deletions(-)
diff --git a/test/dsa.c b/test/dsa.c
index 1191254..29aee41 100644
--- a/test/dsa.c
+++ b/test/dsa.c
@@ -215,6 +215,7 @@ int dsa_alloc(struct dsa_context *ctx, int shared, int dev_id, int wq_id)
dev = accfg_wq_get_device(ctx->wq);
ctx->dedicated = accfg_wq_get_mode(ctx->wq);
ctx->wq_size = accfg_wq_get_size(ctx->wq);
+ ctx->threshold = accfg_wq_get_threshold(ctx->wq);
ctx->wq_idx = accfg_wq_get_id(ctx->wq);
ctx->bof = accfg_wq_get_block_on_fault(ctx->wq);
ctx->wq_max_batch_size = accfg_wq_get_max_batch_size(ctx->wq);
@@ -232,15 +233,23 @@ int dsa_alloc(struct dsa_context *ctx, int shared, int dev_id, int wq_id)
return 0;
}
-int alloc_task(struct dsa_context *ctx)
+int alloc_multiple_tasks(struct dsa_context *ctx, int num_itr)
{
- ctx->single_task = __alloc_task();
- if (!ctx->single_task)
- return -ENOMEM;
+ struct task_node *tmp_tsk_node;
+ int cnt = 0;
- dbg("single task allocated, desc %#lx comp %#lx\n",
- ctx->single_task->desc, ctx->single_task->comp);
+ while (cnt < num_itr) {
+ tmp_tsk_node = ctx->multi_task_node;
+ ctx->multi_task_node = (struct task_node *)malloc(sizeof(struct task_node));
+ if (!ctx->multi_task_node)
+ return -ENOMEM;
+ ctx->multi_task_node->tsk = __alloc_task();
+ if (!ctx->multi_task_node->tsk)
+ return -ENOMEM;
+ ctx->multi_task_node->next = tmp_tsk_node;
+ cnt++;
+ }
return DSA_STATUS_OK;
}
@@ -333,48 +342,60 @@ int init_task(struct task *tsk, int tflags, int opcode,
return DSA_STATUS_OK;
}
-int alloc_batch_task(struct dsa_context *ctx, unsigned int task_num)
+int alloc_batch_task(struct dsa_context *ctx, unsigned int task_num, int num_itr)
{
+ struct btask_node *btsk_node;
struct batch_task *btsk;
+ int cnt = 0;
if (!ctx->is_batch) {
err("%s is valid only if 'is_batch' is enabled", __func__);
return -EINVAL;
}
- ctx->batch_task = malloc(sizeof(struct batch_task));
- if (!ctx->batch_task)
- return -ENOMEM;
- memset(ctx->batch_task, 0, sizeof(struct batch_task));
+ while (cnt < num_itr) {
+ btsk_node = ctx->multi_btask_node;
- btsk = ctx->batch_task;
+ ctx->multi_btask_node = (struct btask_node *)
+ malloc(sizeof(struct btask_node));
+ if (!ctx->multi_btask_node)
+ return -ENOMEM;
- btsk->core_task = __alloc_task();
- if (!btsk->core_task)
- return -ENOMEM;
+ ctx->multi_btask_node->btsk = malloc(sizeof(struct batch_task));
+ if (!ctx->multi_btask_node->btsk)
+ return -ENOMEM;
+ memset(ctx->multi_btask_node->btsk, 0, sizeof(struct batch_task));
+
+ btsk = ctx->multi_btask_node->btsk;
- btsk->sub_tasks = malloc(task_num * sizeof(struct task));
- if (!btsk->sub_tasks)
- return -ENOMEM;
- memset(btsk->sub_tasks, 0, task_num * sizeof(struct task));
+ btsk->core_task = __alloc_task();
+ if (!btsk->core_task)
+ return -ENOMEM;
- btsk->sub_descs = aligned_alloc(64,
- task_num * sizeof(struct dsa_hw_desc));
- if (!btsk->sub_descs)
- return -ENOMEM;
- memset(btsk->sub_descs, 0, task_num * sizeof(struct dsa_hw_desc));
+ btsk->sub_tasks = malloc(task_num * sizeof(struct task));
+ if (!btsk->sub_tasks)
+ return -ENOMEM;
+ memset(btsk->sub_tasks, 0, task_num * sizeof(struct task));
- btsk->sub_comps = aligned_alloc(32,
- task_num * sizeof(struct dsa_completion_record));
- if (!btsk->sub_comps)
- return -ENOMEM;
- memset(btsk->sub_comps, 0,
- task_num * sizeof(struct dsa_completion_record));
+ btsk->sub_descs = aligned_alloc(64, task_num * sizeof(struct dsa_hw_desc));
+ if (!btsk->sub_descs)
+ return -ENOMEM;
+ memset(btsk->sub_descs, 0, task_num * sizeof(struct dsa_hw_desc));
- dbg("batch task allocated %#lx, ctask %#lx, sub_tasks %#lx\n",
- btsk, btsk->core_task, btsk->sub_tasks);
- dbg("sub_descs %#lx, sub_comps %#lx\n",
- btsk->sub_descs, btsk->sub_comps);
+ btsk->sub_comps =
+ aligned_alloc(32, task_num * sizeof(struct dsa_completion_record));
+ if (!btsk->sub_comps)
+ return -ENOMEM;
+ memset(btsk->sub_comps, 0,
+ task_num * sizeof(struct dsa_completion_record));
+
+ dbg("batch task allocated %#lx, ctask %#lx, sub_tasks %#lx\n",
+ btsk, btsk->core_task, btsk->sub_tasks);
+ dbg("sub_descs %#lx, sub_comps %#lx\n",
+ btsk->sub_descs, btsk->sub_comps);
+ ctx->multi_btask_node->next = btsk_node;
+ cnt++;
+ }
return DSA_STATUS_OK;
}
@@ -541,10 +562,31 @@ void dsa_free(struct dsa_context *ctx)
void dsa_free_task(struct dsa_context *ctx)
{
- if (!ctx->is_batch)
- free_task(ctx->single_task);
- else
- free_batch_task(ctx->batch_task);
+ if (!ctx->is_batch) {
+ struct task_node *tsk_node = NULL, *tmp_node = NULL;
+
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ tmp_node = tsk_node->next;
+ free_task(tsk_node->tsk);
+ tsk_node->tsk = NULL;
+ free(tsk_node);
+ tsk_node = tmp_node;
+ }
+ ctx->multi_task_node = NULL;
+ } else {
+ struct btask_node *tsk_node = NULL, *tmp_node = NULL;
+
+ tsk_node = ctx->multi_btask_node;
+ while (tsk_node) {
+ tmp_node = tsk_node->next;
+ free_batch_task(tsk_node->btsk);
+ tsk_node->btsk = NULL;
+ free(tsk_node);
+ tsk_node = tmp_node;
+ }
+ ctx->multi_btask_node = NULL;
+ }
}
void free_task(struct task *tsk)
@@ -594,9 +636,9 @@ void free_batch_task(struct batch_task *btsk)
free(btsk);
}
-int dsa_wait_noop(struct dsa_context *ctx)
+int dsa_wait_noop(struct dsa_context *ctx, struct task *tsk)
{
- struct dsa_completion_record *comp = ctx->single_task->comp;
+ struct dsa_completion_record *comp = tsk->comp;
int rc;
rc = dsa_wait_on_desc_timeout(comp, ms_timeout);
@@ -608,25 +650,40 @@ int dsa_wait_noop(struct dsa_context *ctx)
return DSA_STATUS_OK;
}
-int dsa_noop(struct dsa_context *ctx)
+int dsa_noop_multi_task_nodes(struct dsa_context *ctx)
{
- struct task *tsk = ctx->single_task;
+ struct task_node *tsk_node = ctx->multi_task_node;
int ret = DSA_STATUS_OK;
- tsk->dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
+ while (tsk_node) {
+ tsk_node->tsk->dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
- dsa_prep_noop(tsk);
- dsa_desc_submit(ctx, tsk->desc);
- ret = dsa_wait_noop(ctx);
+ dsa_prep_noop(tsk_node->tsk);
+ tsk_node = tsk_node->next;
+ }
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ dsa_desc_submit(ctx, tsk_node->tsk->desc);
+ tsk_node = tsk_node->next;
+ }
+ tsk_node = ctx->multi_task_node;
+ info("Submitted all noop jobs\n");
+
+ while (tsk_node) {
+ ret = dsa_wait_noop(ctx, tsk_node->tsk);
+ if (ret != DSA_STATUS_OK)
+ info("Desc: %p failed with status: %d\n",
+ tsk_node->tsk->desc, tsk_node->tsk->comp->status);
+ tsk_node = tsk_node->next;
+ }
return ret;
}
-int dsa_wait_batch(struct dsa_context *ctx)
+int dsa_wait_batch(struct batch_task *btsk)
{
int rc;
- struct batch_task *btsk = ctx->batch_task;
struct task *ctsk = btsk->core_task;
info("wait batch\n");
@@ -641,10 +698,10 @@ int dsa_wait_batch(struct dsa_context *ctx)
return DSA_STATUS_OK;
}
-int dsa_wait_memcpy(struct dsa_context *ctx)
+int dsa_wait_memcpy(struct dsa_context *ctx, struct task *tsk)
{
- struct dsa_hw_desc *desc = ctx->single_task->desc;
- struct dsa_completion_record *comp = ctx->single_task->comp;
+ struct dsa_hw_desc *desc = tsk->desc;
+ struct dsa_completion_record *comp = tsk->comp;
int rc;
again:
@@ -657,33 +714,50 @@ again:
/* re-submit if PAGE_FAULT reported by HW && BOF is off */
if (stat_val(comp->status) == DSA_COMP_PAGE_FAULT_NOBOF &&
!(desc->flags & IDXD_OP_FLAG_BOF)) {
- dsa_reprep_memcpy(ctx);
+ dsa_reprep_memcpy(ctx, tsk);
goto again;
}
return DSA_STATUS_OK;
}
-int dsa_memcpy(struct dsa_context *ctx)
+int dsa_memcpy_multi_task_nodes(struct dsa_context *ctx)
{
- struct task *tsk = ctx->single_task;
+ struct task_node *tsk_node = ctx->multi_task_node;
int ret = DSA_STATUS_OK;
- tsk->dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
- if ((tsk->test_flags & TEST_FLAGS_BOF) && ctx->bof)
- tsk->dflags |= IDXD_OP_FLAG_BOF;
+ while (tsk_node) {
+ tsk_node->tsk->dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
+ if ((tsk_node->tsk->test_flags & TEST_FLAGS_BOF) && ctx->bof)
+ tsk_node->tsk->dflags |= IDXD_OP_FLAG_BOF;
- dsa_prep_memcpy(tsk);
- dsa_desc_submit(ctx, tsk->desc);
- ret = dsa_wait_memcpy(ctx);
+ dsa_prep_memcpy(tsk_node->tsk);
+ tsk_node = tsk_node->next;
+ }
+
+ info("Submitting all memcpy jobs\n");
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ dsa_desc_submit(ctx, tsk_node->tsk->desc);
+ tsk_node = tsk_node->next;
+ }
+
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ ret = dsa_wait_memcpy(ctx, tsk_node->tsk);
+ if (ret != DSA_STATUS_OK)
+ info("Desc: %p failed with status: %d\n",
+ tsk_node->tsk->desc, tsk_node->tsk->comp->status);
+ tsk_node = tsk_node->next;
+ }
return ret;
}
-int dsa_wait_memfill(struct dsa_context *ctx)
+int dsa_wait_memfill(struct dsa_context *ctx, struct task *tsk)
{
- struct dsa_hw_desc *desc = ctx->single_task->desc;
- struct dsa_completion_record *comp = ctx->single_task->comp;
+ struct dsa_hw_desc *desc = tsk->desc;
+ struct dsa_completion_record *comp = tsk->comp;
int rc;
again:
@@ -697,33 +771,50 @@ again:
/* re-submit if PAGE_FAULT reported by HW && BOF is off */
if (stat_val(comp->status) == DSA_COMP_PAGE_FAULT_NOBOF &&
!(desc->flags & IDXD_OP_FLAG_BOF)) {
- dsa_reprep_memfill(ctx);
+ dsa_reprep_memfill(ctx, tsk);
goto again;
}
return DSA_STATUS_OK;
}
-int dsa_memfill(struct dsa_context *ctx)
+int dsa_memfill_multi_task_nodes(struct dsa_context *ctx)
{
- struct task *tsk = ctx->single_task;
+ struct task_node *tsk_node = ctx->multi_task_node;
int ret = DSA_STATUS_OK;
- tsk->dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
- if ((tsk->test_flags & TEST_FLAGS_BOF) && ctx->bof)
- tsk->dflags |= IDXD_OP_FLAG_BOF;
+ while (tsk_node) {
+ tsk_node->tsk->dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
+ if ((tsk_node->tsk->test_flags & TEST_FLAGS_BOF) && ctx->bof)
+ tsk_node->tsk->dflags |= IDXD_OP_FLAG_BOF;
+
+ dsa_prep_memfill(tsk_node->tsk);
+ tsk_node = tsk_node->next;
+ }
+
+ info("Submitting all memfill jobs\n");
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ dsa_desc_submit(ctx, tsk_node->tsk->desc);
+ tsk_node = tsk_node->next;
+ }
- dsa_prep_memfill(tsk);
- dsa_desc_submit(ctx, tsk->desc);
- ret = dsa_wait_memfill(ctx);
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ ret = dsa_wait_memfill(ctx, tsk_node->tsk);
+ if (ret != DSA_STATUS_OK)
+ info("Desc: %p failed with status: %d\n",
+ tsk_node->tsk->desc, tsk_node->tsk->comp->status);
+ tsk_node = tsk_node->next;
+ }
return ret;
}
-int dsa_wait_compare(struct dsa_context *ctx)
+int dsa_wait_compare(struct dsa_context *ctx, struct task *tsk)
{
- struct dsa_hw_desc *desc = ctx->single_task->desc;
- struct dsa_completion_record *comp = ctx->single_task->comp;
+ struct dsa_hw_desc *desc = tsk->desc;
+ struct dsa_completion_record *comp = tsk->comp;
int rc;
again:
@@ -737,33 +828,50 @@ again:
/* re-submit if PAGE_FAULT reported by HW && BOF is off */
if (stat_val(comp->status) == DSA_COMP_PAGE_FAULT_NOBOF &&
!(desc->flags & IDXD_OP_FLAG_BOF)) {
- dsa_reprep_compare(ctx);
+ dsa_reprep_compare(ctx, tsk);
goto again;
}
return DSA_STATUS_OK;
}
-int dsa_compare(struct dsa_context *ctx)
+int dsa_compare_multi_task_nodes(struct dsa_context *ctx)
{
- struct task *tsk = ctx->single_task;
+ struct task_node *tsk_node = ctx->multi_task_node;
int ret = DSA_STATUS_OK;
- tsk->dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
- if ((tsk->test_flags & TEST_FLAGS_BOF) && ctx->bof)
- tsk->dflags |= IDXD_OP_FLAG_BOF;
+ while (tsk_node) {
+ tsk_node->tsk->dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
+ if ((tsk_node->tsk->test_flags & TEST_FLAGS_BOF) && ctx->bof)
+ tsk_node->tsk->dflags |= IDXD_OP_FLAG_BOF;
+
+ dsa_prep_compare(tsk_node->tsk);
+ tsk_node = tsk_node->next;
+ }
+
+ info("Submitting all compare jobs\n");
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ dsa_desc_submit(ctx, tsk_node->tsk->desc);
+ tsk_node = tsk_node->next;
+ }
- dsa_prep_compare(tsk);
- dsa_desc_submit(ctx, tsk->desc);
- ret = dsa_wait_compare(ctx);
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ ret = dsa_wait_compare(ctx, tsk_node->tsk);
+ if (ret != DSA_STATUS_OK)
+ info("Desc: %p failed with status: %d\n",
+ tsk_node->tsk->desc, tsk_node->tsk->comp->status);
+ tsk_node = tsk_node->next;
+ }
return ret;
}
-int dsa_wait_compval(struct dsa_context *ctx)
+int dsa_wait_compval(struct dsa_context *ctx, struct task *tsk)
{
- struct dsa_hw_desc *desc = ctx->single_task->desc;
- struct dsa_completion_record *comp = ctx->single_task->comp;
+ struct dsa_hw_desc *desc = tsk->desc;
+ struct dsa_completion_record *comp = tsk->comp;
int rc;
again:
@@ -777,33 +885,50 @@ again:
/* re-submit if PAGE_FAULT reported by HW && BOF is off */
if (stat_val(comp->status) == DSA_COMP_PAGE_FAULT_NOBOF &&
!(desc->flags & IDXD_OP_FLAG_BOF)) {
- dsa_reprep_compval(ctx);
+ dsa_reprep_compval(ctx, tsk);
goto again;
}
return DSA_STATUS_OK;
}
-int dsa_compval(struct dsa_context *ctx)
+int dsa_compval_multi_task_nodes(struct dsa_context *ctx)
{
- struct task *tsk = ctx->single_task;
+ struct task_node *tsk_node = ctx->multi_task_node;
int ret = DSA_STATUS_OK;
- tsk->dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
- if ((tsk->test_flags & TEST_FLAGS_BOF) && ctx->bof)
- tsk->dflags |= IDXD_OP_FLAG_BOF;
+ while (tsk_node) {
+ tsk_node->tsk->dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
+ if ((tsk_node->tsk->test_flags & TEST_FLAGS_BOF) && ctx->bof)
+ tsk_node->tsk->dflags |= IDXD_OP_FLAG_BOF;
+
+ dsa_prep_compval(tsk_node->tsk);
+ tsk_node = tsk_node->next;
+ }
- dsa_prep_compval(tsk);
- dsa_desc_submit(ctx, tsk->desc);
- ret = dsa_wait_compval(ctx);
+ info("Submitting all compval jobs\n");
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ dsa_desc_submit(ctx, tsk_node->tsk->desc);
+ tsk_node = tsk_node->next;
+ }
+
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ ret = dsa_wait_compval(ctx, tsk_node->tsk);
+ if (ret != DSA_STATUS_OK)
+ info("Desc: %p failed with status: %d\n",
+ tsk_node->tsk->desc, tsk_node->tsk->comp->status);
+ tsk_node = tsk_node->next;
+ }
return ret;
}
-int dsa_wait_dualcast(struct dsa_context *ctx)
+int dsa_wait_dualcast(struct dsa_context *ctx, struct task *tsk)
{
- struct dsa_hw_desc *desc = ctx->single_task->desc;
- struct dsa_completion_record *comp = ctx->single_task->comp;
+ struct dsa_hw_desc *desc = tsk->desc;
+ struct dsa_completion_record *comp = tsk->comp;
int rc;
again:
@@ -816,25 +941,42 @@ again:
/* re-submit if PAGE_FAULT reported by HW && BOF is off */
if (stat_val(comp->status) == DSA_COMP_PAGE_FAULT_NOBOF &&
!(desc->flags & IDXD_OP_FLAG_BOF)) {
- dsa_reprep_dualcast(ctx);
+ dsa_reprep_dualcast(ctx, tsk);
goto again;
}
return DSA_STATUS_OK;
}
-int dsa_dualcast(struct dsa_context *ctx)
+int dsa_dualcast_multi_task_nodes(struct dsa_context *ctx)
{
- struct task *tsk = ctx->single_task;
+ struct task_node *tsk_node = ctx->multi_task_node;
int ret = DSA_STATUS_OK;
- tsk->dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
- if ((tsk->test_flags & TEST_FLAGS_BOF) && ctx->bof)
- tsk->dflags |= IDXD_OP_FLAG_BOF;
+ while (tsk_node) {
+ tsk_node->tsk->dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
+ if ((tsk_node->tsk->test_flags & TEST_FLAGS_BOF) && ctx->bof)
+ tsk_node->tsk->dflags |= IDXD_OP_FLAG_BOF;
- dsa_prep_dualcast(tsk);
- dsa_desc_submit(ctx, tsk->desc);
- ret = dsa_wait_dualcast(ctx);
+ dsa_prep_dualcast(tsk_node->tsk);
+ tsk_node = tsk_node->next;
+ }
+
+ info("Submitting all dualcast jobs\n");
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ dsa_desc_submit(ctx, tsk_node->tsk->desc);
+ tsk_node = tsk_node->next;
+ }
+
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ ret = dsa_wait_dualcast(ctx, tsk_node->tsk);
+ if (ret != DSA_STATUS_OK)
+ info("Desc: %p failed with status: %d\n",
+ tsk_node->tsk->desc, tsk_node->tsk->comp->status);
+ tsk_node = tsk_node->next;
+ }
return ret;
}
@@ -872,6 +1014,23 @@ int task_result_verify(struct task *tsk, int mismatch_expected)
return DSA_STATUS_OK;
}
+int task_result_verify_task_nodes(struct dsa_context *ctx, int mismatch_expected)
+{
+ struct task_node *tsk_node = ctx->multi_task_node;
+ int ret = DSA_STATUS_OK;
+
+ while (tsk_node) {
+ ret = task_result_verify(tsk_node->tsk, mismatch_expected);
+ if (ret != DSA_STATUS_OK) {
+ err("memory result verify failed %d\n", ret);
+ return ret;
+ }
+ tsk_node = tsk_node->next;
+ }
+
+ return ret;
+}
+
int task_result_verify_memcpy(struct task *tsk, int mismatch_expected)
{
int rc;
diff --git a/test/dsa.h b/test/dsa.h
index df2fc09..382fc66 100644
--- a/test/dsa.h
+++ b/test/dsa.h
@@ -62,6 +62,11 @@ struct task {
int test_flags;
};
+struct task_node {
+ struct task *tsk;
+ struct task_node *next;
+};
+
/* metadata for batch DSA task */
struct batch_task {
struct task *core_task; /* core task with batch opcode 0x1*/
@@ -72,6 +77,11 @@ struct batch_task {
int test_flags;
};
+struct btask_node {
+ struct batch_task *btsk;
+ struct btask_node *next;
+};
+
struct dsa_context {
struct accfg_ctx *ctx;
struct accfg_wq *wq;
@@ -84,6 +94,7 @@ struct dsa_context {
int wq_idx;
void *wq_reg;
int wq_size;
+ int threshold;
int dedicated;
int bof;
unsigned int wq_max_batch_size;
@@ -92,8 +103,8 @@ struct dsa_context {
int is_batch;
union {
- struct task *single_task;
- struct batch_task *batch_task;
+ struct task_node *multi_task_node;
+ struct btask_node *multi_btask_node;
};
};
@@ -214,42 +225,43 @@ int dsa_enqcmd(struct dsa_context *ctx, struct dsa_hw_desc *hw);
struct dsa_context *dsa_init(void);
int dsa_alloc(struct dsa_context *ctx, int shared, int dev_id, int wq_id);
-int alloc_task(struct dsa_context *ctx);
+int alloc_multiple_tasks(struct dsa_context *ctx, int num_itr);
struct task *__alloc_task(void);
int init_task(struct task *tsk, int tflags, int opcode,
unsigned long xfer_size);
-int dsa_noop(struct dsa_context *ctx);
-int dsa_wait_noop(struct dsa_context *ctx);
+int dsa_noop_multi_task_nodes(struct dsa_context *ctx);
+int dsa_wait_noop(struct dsa_context *ctx, struct task *tsk);
-int dsa_memcpy(struct dsa_context *ctx);
-int dsa_wait_memcpy(struct dsa_context *ctx);
+int dsa_memcpy_multi_task_nodes(struct dsa_context *ctx);
+int dsa_wait_memcpy(struct dsa_context *ctx, struct task *tsk);
-int dsa_memfill(struct dsa_context *ctx);
-int dsa_wait_memfill(struct dsa_context *ctx);
+int dsa_memfill_multi_task_nodes(struct dsa_context *ctx);
+int dsa_wait_memfill(struct dsa_context *ctx, struct task *tsk);
-int dsa_compare(struct dsa_context *ctx);
-int dsa_wait_compare(struct dsa_context *ctx);
+int dsa_compare_multi_task_nodes(struct dsa_context *ctx);
+int dsa_wait_compare(struct dsa_context *ctx, struct task *tsk);
-int dsa_compval(struct dsa_context *ctx);
-int dsa_wait_compval(struct dsa_context *ctx);
+int dsa_compval_multi_task_nodes(struct dsa_context *ctx);
+int dsa_wait_compval(struct dsa_context *ctx, struct task *tsk);
-int dsa_dualcast(struct dsa_context *ctx);
-int dsa_wait_dualcast(struct dsa_context *ctx);
+int dsa_dualcast_multi_task_nodes(struct dsa_context *ctx);
+int dsa_wait_dualcast(struct dsa_context *ctx, struct task *tsk);
void dsa_prep_noop(struct task *tsk);
void dsa_prep_memcpy(struct task *tsk);
-void dsa_reprep_memcpy(struct dsa_context *ctx);
+void dsa_reprep_memcpy(struct dsa_context *ctx, struct task *tsk);
void dsa_prep_memfill(struct task *tsk);
-void dsa_reprep_memfill(struct dsa_context *ctx);
+void dsa_reprep_memfill(struct dsa_context *ctx, struct task *tsk);
void dsa_prep_compare(struct task *tsk);
-void dsa_reprep_compare(struct dsa_context *ctx);
+void dsa_reprep_compare(struct dsa_context *ctx, struct task *tsk);
void dsa_prep_compval(struct task *tsk);
-void dsa_reprep_compval(struct dsa_context *ctx);
+void dsa_reprep_compval(struct dsa_context *ctx, struct task *tsk);
void dsa_prep_dualcast(struct task *tsk);
-void dsa_reprep_dualcast(struct dsa_context *ctx);
+void dsa_reprep_dualcast(struct dsa_context *ctx, struct task *tsk);
int task_result_verify(struct task *tsk, int mismatch_expected);
+int task_result_verify_task_nodes(struct dsa_context *ctx, int mismatch_expected);
int task_result_verify_memcpy(struct task *tsk, int mismatch_expected);
int task_result_verify_memfill(struct task *tsk, int mismatch_expected);
int task_result_verify_compare(struct task *tsk, int mismatch_expected);
@@ -257,7 +269,7 @@ int task_result_verify_compval(struct task *tsk, int mismatch_expected);
int task_result_verify_dualcast(struct task *tsk, int mismatch_expected);
int batch_result_verify(struct batch_task *btsk, int bof);
-int alloc_batch_task(struct dsa_context *ctx, unsigned int task_num);
+int alloc_batch_task(struct dsa_context *ctx, unsigned int task_num, int num_itr);
int init_batch_task(struct batch_task *btsk, int task_num, int tflags,
int opcode, unsigned long xfer_size, unsigned long dflags);
@@ -268,7 +280,7 @@ void dsa_prep_batch_memfill(struct batch_task *btsk);
void dsa_prep_batch_compare(struct batch_task *btsk);
void dsa_prep_batch_compval(struct batch_task *btsk);
void dsa_prep_batch_dualcast(struct batch_task *btsk);
-int dsa_wait_batch(struct dsa_context *ctx);
+int dsa_wait_batch(struct batch_task *btsk);
void dsa_free(struct dsa_context *ctx);
void dsa_free_task(struct dsa_context *ctx);
diff --git a/test/dsa_test.c b/test/dsa_test.c
index cd3bd50..eccf89c 100644
--- a/test/dsa_test.c
+++ b/test/dsa_test.c
@@ -24,19 +24,22 @@ static void usage(void)
"-b <opcode> ; if batch opcode, opcode in the batch\n"
"-c <batch_size> ; if batch opcode, number of descriptors for batch\n"
"-d ; wq device such as dsa0/wq0.0\n"
+ "-n <number of descriptors> ; descriptor count to submit\n"
"-t <ms timeout> ; ms to wait for descs to complete\n"
"-v ; verbose\n"
"-h ; print this message\n");
}
static int test_batch(struct dsa_context *ctx, size_t buf_size,
- int tflags, uint32_t bopcode, unsigned int bsize)
+ int tflags, uint32_t bopcode, unsigned int bsize, int num_desc)
{
+ struct btask_node *btsk_node;
unsigned long dflags;
- int rc = 0;
+ int rc = DSA_STATUS_OK;
+ int itr = num_desc, i = 0, range = 0;
- info("batch: len %#lx tflags %#x bopcode %#x batch_no %d\n",
- buf_size, tflags, bopcode, bsize);
+ info("batch: len %#lx tflags %#x bopcode %#x batch_no %d num_desc %d\n",
+ buf_size, tflags, bopcode, bsize, num_desc);
if (bopcode == DSA_OPCODE_BATCH) {
err("Can't have batch op inside batch op\n");
@@ -45,198 +48,279 @@ static int test_batch(struct dsa_context *ctx, size_t buf_size,
ctx->is_batch = 1;
- rc = alloc_batch_task(ctx, bsize);
- if (rc != DSA_STATUS_OK)
- return rc;
+ if (ctx->dedicated == ACCFG_WQ_SHARED)
+ range = ctx->threshold;
+ else
+ range = ctx->wq_size - 1;
- dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
- if ((tflags & TEST_FLAGS_BOF) && ctx->bof)
- dflags |= IDXD_OP_FLAG_BOF;
+ while (itr > 0 && rc == DSA_STATUS_OK) {
+ i = (itr < range) ? itr : range;
+ rc = alloc_batch_task(ctx, bsize, i);
+ if (rc != DSA_STATUS_OK)
+ return rc;
- rc = init_batch_task(ctx->batch_task, bsize, tflags, bopcode,
- buf_size, dflags);
- if (rc != DSA_STATUS_OK)
- return rc;
+ dflags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR;
+ if ((tflags & TEST_FLAGS_BOF) && ctx->bof)
+ dflags |= IDXD_OP_FLAG_BOF;
+
+ /* allocate memory to src and dest buffers and fill in the desc for all the nodes*/
+ btsk_node = ctx->multi_btask_node;
+ while (btsk_node) {
+ rc = init_batch_task(btsk_node->btsk, bsize, tflags, bopcode,
+ buf_size, dflags);
+ if (rc != DSA_STATUS_OK)
+ return rc;
+
+ switch (bopcode) {
+ case DSA_OPCODE_NOOP:
+ dsa_prep_batch_noop(btsk_node->btsk);
+ break;
+
+ case DSA_OPCODE_MEMMOVE:
+ dsa_prep_batch_memcpy(btsk_node->btsk);
+ break;
+
+ case DSA_OPCODE_MEMFILL:
+ dsa_prep_batch_memfill(btsk_node->btsk);
+ break;
+
+ case DSA_OPCODE_COMPARE:
+ dsa_prep_batch_compare(btsk_node->btsk);
+ break;
+ case DSA_OPCODE_COMPVAL:
+ dsa_prep_batch_compval(btsk_node->btsk);
+ break;
+
+ case DSA_OPCODE_DUALCAST:
+ dsa_prep_batch_dualcast(btsk_node->btsk);
+ break;
+ default:
+ err("Unsupported op %#x\n", bopcode);
+ return -EINVAL;
+ }
- switch (bopcode) {
- case DSA_OPCODE_NOOP:
- dsa_prep_batch_noop(ctx->batch_task);
- break;
- case DSA_OPCODE_MEMMOVE:
- dsa_prep_batch_memcpy(ctx->batch_task);
- break;
+ btsk_node = btsk_node->next;
+ }
- case DSA_OPCODE_MEMFILL:
- dsa_prep_batch_memfill(ctx->batch_task);
- break;
+ btsk_node = ctx->multi_btask_node;
+ while (btsk_node) {
+ dsa_prep_batch(btsk_node->btsk, dflags);
+ dump_sub_desc(btsk_node->btsk);
+ btsk_node = btsk_node->next;
+ }
- case DSA_OPCODE_COMPARE:
- dsa_prep_batch_compare(ctx->batch_task);
- break;
+ btsk_node = ctx->multi_btask_node;
+ while (btsk_node) {
+ dsa_desc_submit(ctx, btsk_node->btsk->core_task->desc);
+ btsk_node = btsk_node->next;
+ }
- case DSA_OPCODE_COMPVAL:
- dsa_prep_batch_compval(ctx->batch_task);
- break;
- case DSA_OPCODE_DUALCAST:
- dsa_prep_batch_dualcast(ctx->batch_task);
- break;
- default:
- err("Unsupported op %#x\n", bopcode);
- return -EINVAL;
- }
+ btsk_node = ctx->multi_btask_node;
+ while (btsk_node) {
+ rc = dsa_wait_batch(btsk_node->btsk);
+ if (rc != DSA_STATUS_OK)
+ err("batch failed stat %d\n", rc);
+ btsk_node = btsk_node->next;
+ }
- dsa_prep_batch(ctx->batch_task, dflags);
- dump_sub_desc(ctx->batch_task);
- dsa_desc_submit(ctx, ctx->batch_task->core_task->desc);
+ btsk_node = ctx->multi_btask_node;
+ while (btsk_node) {
+ rc = batch_result_verify(btsk_node->btsk, dflags & IDXD_OP_FLAG_BOF);
+ btsk_node = btsk_node->next;
+ }
- rc = dsa_wait_batch(ctx);
- if (rc != DSA_STATUS_OK) {
- err("batch failed stat %d\n", rc);
- rc = -ENXIO;
+ dsa_free_task(ctx);
+ itr = itr - range;
}
- rc = batch_result_verify(ctx->batch_task, dflags & IDXD_OP_FLAG_BOF);
-
return rc;
}
-static int test_noop(struct dsa_context *ctx, int tflags)
+static int test_noop(struct dsa_context *ctx, int tflags, int num_desc)
{
- struct task *tsk;
- int rc;
+ struct task_node *tsk_node;
+ int rc = DSA_STATUS_OK;
+ int itr = num_desc, i = 0, range = 0;
- info("noop: tflags %#x\n", tflags);
+ info("testnoop: tflags %#x num_desc %d\n", tflags, num_desc);
ctx->is_batch = 0;
- rc = alloc_task(ctx);
- if (rc != DSA_STATUS_OK) {
- err("noop: alloc task failed, rc=%d\n", rc);
- return rc;
- }
+ if (ctx->dedicated == ACCFG_WQ_SHARED)
+ range = ctx->threshold;
+ else
+ range = ctx->wq_size - 1;
- tsk = ctx->single_task;
+ while (itr > 0 && rc == DSA_STATUS_OK) {
+ i = (itr < range) ? itr : range;
+ /* Allocate memory to all the task nodes, desc, completion record*/
+ rc = alloc_multiple_tasks(ctx, i);
+ if (rc != DSA_STATUS_OK)
+ return rc;
- rc = dsa_noop(ctx);
- if (rc != DSA_STATUS_OK) {
- err("noop failed stat %d\n", rc);
- return rc;
- }
+ /* allocate memory to src and dest buffers and fill in the desc for all the nodes*/
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ tsk_node->tsk->opcode = DSA_OPCODE_NOOP;
+ tsk_node->tsk->test_flags = tflags;
+ tsk_node = tsk_node->next;
+ }
+
+ rc = dsa_noop_multi_task_nodes(ctx);
+ if (rc != DSA_STATUS_OK)
+ return rc;
- rc = task_result_verify(tsk, 0);
- if (rc != DSA_STATUS_OK)
- return rc;
+ /* Verification of all the nodes*/
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ rc = task_result_verify(tsk_node->tsk, 0);
+ tsk_node = tsk_node->next;
+ }
+
+ dsa_free_task(ctx);
+ itr = itr - range;
+ }
return rc;
}
static int test_memory(struct dsa_context *ctx, size_t buf_size,
- int tflags, uint32_t opcode)
+ int tflags, uint32_t opcode, int num_desc)
{
- struct task *tsk;
- int rc;
+ struct task_node *tsk_node;
+ int rc = DSA_STATUS_OK;
+ int itr = num_desc, i = 0, range = 0;
- info("mem: len %#lx tflags %#x opcode %d\n", buf_size, tflags, opcode);
+ info("testmemory: opcode %d len %#lx tflags %#x num_desc %d\n",
+ opcode, buf_size, tflags, num_desc);
ctx->is_batch = 0;
- rc = alloc_task(ctx);
- if (rc != DSA_STATUS_OK) {
- err("mem: alloc task failed opcode %d, rc=%d\n", opcode, rc);
- return rc;
- }
-
- tsk = ctx->single_task;
- rc = init_task(tsk, tflags, opcode, buf_size);
- if (rc != DSA_STATUS_OK) {
- err("mem: init task failed opcode %d, rc=%d\n", opcode, rc);
- return rc;
- }
-
- switch (opcode) {
- case DSA_OPCODE_MEMMOVE:
- rc = dsa_memcpy(ctx);
- if (rc != DSA_STATUS_OK)
- return rc;
-
- rc = task_result_verify(tsk, 0);
- if (rc != DSA_STATUS_OK)
- return rc;
- break;
-
- case DSA_OPCODE_MEMFILL:
- rc = dsa_memfill(ctx);
- if (rc != DSA_STATUS_OK)
- return rc;
-
- rc = task_result_verify(tsk, 0);
- if (rc != DSA_STATUS_OK)
- return rc;
- break;
+ if (ctx->dedicated == ACCFG_WQ_SHARED)
+ range = ctx->threshold;
+ else
+ range = ctx->wq_size - 1;
- case DSA_OPCODE_COMPARE:
- rc = dsa_compare(ctx);
+ while (itr > 0 && rc == DSA_STATUS_OK) {
+ i = (itr < range) ? itr : range;
+ /* Allocate memory to all the task nodes, desc, completion record*/
+ rc = alloc_multiple_tasks(ctx, i);
if (rc != DSA_STATUS_OK)
return rc;
- rc = task_result_verify(tsk, 0);
- if (rc != DSA_STATUS_OK)
- return rc;
+ /* allocate memory to src and dest buffers and fill in the desc for all the nodes*/
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ tsk_node->tsk->xfer_size = buf_size;
- info("Testing mismatch buffers\n");
- info("creating a diff at index %#lx\n", tsk->xfer_size / 2);
- ((uint8_t *)(tsk->src1))[tsk->xfer_size / 2] = 0;
- ((uint8_t *)(tsk->src2))[tsk->xfer_size / 2] = 1;
+ rc = init_task(tsk_node->tsk, tflags, opcode, buf_size);
+ if (rc != DSA_STATUS_OK)
+ return rc;
- memset(tsk->comp, 0, sizeof(struct dsa_completion_record));
+ tsk_node = tsk_node->next;
+ }
- rc = dsa_compare(ctx);
- if (rc != DSA_STATUS_OK)
- return rc;
+ switch (opcode) {
+ case DSA_OPCODE_MEMMOVE:
+ rc = dsa_memcpy_multi_task_nodes(ctx);
+ if (rc != DSA_STATUS_OK)
+ return rc;
- rc = task_result_verify(tsk, 1);
- if (rc != DSA_STATUS_OK)
- return rc;
- break;
+ /* Verification of all the nodes*/
+ rc = task_result_verify_task_nodes(ctx, 0);
+ if (rc != DSA_STATUS_OK)
+ return rc;
+ break;
- case DSA_OPCODE_COMPVAL:
- rc = dsa_compval(ctx);
- if (rc != DSA_STATUS_OK)
- return rc;
+ case DSA_OPCODE_MEMFILL:
+ rc = dsa_memfill_multi_task_nodes(ctx);
+ if (rc != DSA_STATUS_OK)
+ return rc;
- rc = task_result_verify(tsk, 0);
- if (rc != DSA_STATUS_OK)
- return rc;
+ /* Verification of all the nodes*/
+ rc = task_result_verify_task_nodes(ctx, 0);
+ if (rc != DSA_STATUS_OK)
+ return rc;
+ break;
- info("Testing mismatching buffers\n");
- info("creating a diff at index %#lx\n", tsk->xfer_size / 2);
- ((uint8_t *)(tsk->src1))[tsk->xfer_size / 2] =
- ~(((uint8_t *)(tsk->src1))[tsk->xfer_size / 2]);
+ case DSA_OPCODE_COMPARE:
+ rc = dsa_compare_multi_task_nodes(ctx);
+ if (rc != DSA_STATUS_OK)
+ return rc;
+
+ /* Verification of all the nodes*/
+ rc = task_result_verify_task_nodes(ctx, 0);
+ if (rc != DSA_STATUS_OK)
+ return rc;
+
+ info("Testing mismatch buffers\n");
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ ((uint8_t *)(tsk_node->tsk->src1))[tsk_node->tsk->xfer_size / 2] =
+ 0;
+ ((uint8_t *)(tsk_node->tsk->src2))[tsk_node->tsk->xfer_size / 2] =
+ 1;
+ memset(tsk_node->tsk->comp, 0,
+ sizeof(struct dsa_completion_record));
+ tsk_node = tsk_node->next;
+ }
- memset(tsk->comp, 0, sizeof(struct dsa_completion_record));
+ rc = dsa_compare_multi_task_nodes(ctx);
+ if (rc != DSA_STATUS_OK)
+ return rc;
- rc = dsa_compval(ctx);
- if (rc != DSA_STATUS_OK)
- return rc;
+ /* Verification of all the nodes*/
+ rc = task_result_verify_task_nodes(ctx, 1);
+ if (rc != DSA_STATUS_OK)
+ return rc;
+ break;
- rc = task_result_verify(tsk, 1);
- if (rc != DSA_STATUS_OK)
- return rc;
- break;
+ case DSA_OPCODE_COMPVAL:
+ rc = dsa_compval_multi_task_nodes(ctx);
+ if (rc != DSA_STATUS_OK)
+ return rc;
+
+ /* Verification of all the nodes*/
+ rc = task_result_verify_task_nodes(ctx, 0);
+ if (rc != DSA_STATUS_OK)
+ return rc;
+
+ info("Testing mismatching buffers\n");
+ tsk_node = ctx->multi_task_node;
+ while (tsk_node) {
+ ((uint8_t *)(tsk_node->tsk->src1))[tsk_node->tsk->xfer_size / 2] =
+ ~(((uint8_t *)(tsk_node->tsk->src1))[tsk_node->tsk->xfer_size / 2]);
+ memset(tsk_node->tsk->comp, 0,
+ sizeof(struct dsa_completion_record));
+ tsk_node = tsk_node->next;
+ }
- case DSA_OPCODE_DUALCAST:
- rc = dsa_dualcast(ctx);
- if (rc != DSA_STATUS_OK)
- return rc;
+ rc = dsa_compval_multi_task_nodes(ctx);
+ if (rc != DSA_STATUS_OK)
+ return rc;
- rc = task_result_verify(tsk, 0);
- if (rc != DSA_STATUS_OK)
- return rc;
- break;
+ /* Verification of all the nodes*/
+ rc = task_result_verify_task_nodes(ctx, 1);
+ if (rc != DSA_STATUS_OK)
+ return rc;
+ break;
+ case DSA_OPCODE_DUALCAST:
+ rc = dsa_dualcast_multi_task_nodes(ctx);
+ if (rc != DSA_STATUS_OK)
+ return rc;
+
+ /* Verification of all the nodes*/
+ rc = task_result_verify_task_nodes(ctx, 0);
+ if (rc != DSA_STATUS_OK)
+ return rc;
+ break;
+ default:
+ err("Unsupported op %#x\n", opcode);
+ return -EINVAL;
+ }
- default:
- err("Unsupported opcode %#x\n", opcode);
- return -EINVAL;
+ dsa_free_task(ctx);
+ itr = itr - range;
}
return rc;
@@ -257,8 +341,9 @@ int main(int argc, char *argv[])
int wq_id = DSA_DEVICE_ID_NO_INPUT;
int dev_id = DSA_DEVICE_ID_NO_INPUT;
int dev_wq_id = DSA_DEVICE_ID_NO_INPUT;
+ unsigned int num_desc = 1;
- while ((opt = getopt(argc, argv, "w:l:f:o:b:c:d:t:p:vh")) != -1) {
+ while ((opt = getopt(argc, argv, "w:l:f:o:b:c:d:n:t:p:vh")) != -1) {
switch (opt) {
case 'w':
wq_type = atoi(optarg);
@@ -286,6 +371,9 @@ int main(int argc, char *argv[])
return -EINVAL;
}
break;
+ case 'n':
+ num_desc = strtoul(optarg, NULL, 0);
+ break;
case 't':
ms_timeout = strtoul(optarg, NULL, 0);
break;
@@ -316,7 +404,7 @@ int main(int argc, char *argv[])
switch (opcode) {
case DSA_OPCODE_NOOP:
- rc = test_noop(dsa, tflags);
+ rc = test_noop(dsa, tflags, num_desc);
if (rc != DSA_STATUS_OK)
goto error;
break;
@@ -327,7 +415,7 @@ int main(int argc, char *argv[])
rc = -EINVAL;
goto error;
}
- rc = test_batch(dsa, buf_size, tflags, bopcode, bsize);
+ rc = test_batch(dsa, buf_size, tflags, bopcode, bsize, num_desc);
if (rc < 0)
goto error;
break;
@@ -337,7 +425,7 @@ int main(int argc, char *argv[])
case DSA_OPCODE_COMPARE:
case DSA_OPCODE_COMPVAL:
case DSA_OPCODE_DUALCAST:
- rc = test_memory(dsa, buf_size, tflags, opcode);
+ rc = test_memory(dsa, buf_size, tflags, opcode, num_desc);
if (rc != DSA_STATUS_OK)
goto error;
break;
diff --git a/test/prep.c b/test/prep.c
index ab18e53..80e977c 100644
--- a/test/prep.c
+++ b/test/prep.c
@@ -51,10 +51,10 @@ void dsa_prep_memcpy(struct task *tsk)
tsk->comp->status = 0;
}
-void dsa_reprep_memcpy(struct dsa_context *ctx)
+void dsa_reprep_memcpy(struct dsa_context *ctx, struct task *tsk)
{
- struct dsa_completion_record *compl = ctx->single_task->comp;
- struct dsa_hw_desc *hw = ctx->single_task->desc;
+ struct dsa_completion_record *compl = tsk->comp;
+ struct dsa_hw_desc *hw = tsk->desc;
info("PF addr %#lx dir %d bc %#x\n",
compl->fault_addr, compl->result,
@@ -119,10 +119,10 @@ void dsa_prep_memfill(struct task *tsk)
tsk->comp->status = 0;
}
-void dsa_reprep_memfill(struct dsa_context *ctx)
+void dsa_reprep_memfill(struct dsa_context *ctx, struct task *tsk)
{
- struct dsa_completion_record *compl = ctx->single_task->comp;
- struct dsa_hw_desc *hw = ctx->single_task->desc;
+ struct dsa_completion_record *compl = tsk->comp;
+ struct dsa_hw_desc *hw = tsk->desc;
info("PF addr %#lx dir %d bc %#x\n",
compl->fault_addr, compl->result,
@@ -166,10 +166,10 @@ void dsa_prep_compare(struct task *tsk)
tsk->comp->status = 0;
}
-void dsa_reprep_compare(struct dsa_context *ctx)
+void dsa_reprep_compare(struct dsa_context *ctx, struct task *tsk)
{
- struct dsa_completion_record *compl = ctx->single_task->comp;
- struct dsa_hw_desc *hw = ctx->single_task->desc;
+ struct dsa_completion_record *compl = tsk->comp;
+ struct dsa_hw_desc *hw = tsk->desc;
info("PF addr %#lx dir %d bc %#x\n",
compl->fault_addr, compl->result,
@@ -214,10 +214,10 @@ void dsa_prep_compval(struct task *tsk)
tsk->comp->status = 0;
}
-void dsa_reprep_compval(struct dsa_context *ctx)
+void dsa_reprep_compval(struct dsa_context *ctx, struct task *tsk)
{
- struct dsa_completion_record *compl = ctx->single_task->comp;
- struct dsa_hw_desc *hw = ctx->single_task->desc;
+ struct dsa_completion_record *compl = tsk->comp;
+ struct dsa_hw_desc *hw = tsk->desc;
info("PF addr %#lx dir %d bc %#x\n",
compl->fault_addr, compl->result,
@@ -262,10 +262,10 @@ void dsa_prep_dualcast(struct task *tsk)
tsk->comp->status = 0;
}
-void dsa_reprep_dualcast(struct dsa_context *ctx)
+void dsa_reprep_dualcast(struct dsa_context *ctx, struct task *tsk)
{
- struct dsa_completion_record *compl = ctx->single_task->comp;
- struct dsa_hw_desc *hw = ctx->single_task->desc;
+ struct dsa_completion_record *compl = tsk->comp;
+ struct dsa_hw_desc *hw = tsk->desc;
info("PF addr %#lx dir %d bc %#x\n",
compl->fault_addr, compl->result,
--
2.27.0
5 months, 2 weeks
[PATCH v1 0/5] Support the multi descriptor submit
by Tony Zhu
To support -n, the test code is restructured. The single task is
replaced with multiple tasks.
Checked with checkpatch.pl; the reported warnings and checks are
resolved, except for the kernel type checks on u64, u32 and u8.
There is no functional change; the random pattern and tflags remain the same.
Tony Zhu (5):
accel-config/test: remove dsa_test code warnings scanned by
checkpatch.pl
accel-config/test: remove dsa_test code checks scanned by
checkpatch.pl
accel-config/test: add test_noop function to replace the test in main
accel-config/test: add test_memory function to remove 5 opcode test
duplicated parts
accel-config/test: add -n to input the submit descriptor number
test/dsa.c | 452 ++++++++++++++++++++++++++-------------
test/dsa.h | 62 +++---
test/dsa_test.c | 554 +++++++++++++++++++++++++-----------------------
test/prep.c | 126 +++++------
4 files changed, 692 insertions(+), 502 deletions(-)
--
2.27.0
5 months, 3 weeks
[PATCH v1 1/1] accel-config/test: fix iax_crypto load/unload issue
by Tony Zhu
When the iax crypto test is configured, the iax_crypto module must be
removed before removing idxd.
Signed-off-by: Tony Zhu <tony.zhu(a)intel.com>
---
test/common | 3 +++
1 file changed, 3 insertions(+)
diff --git a/test/common b/test/common
index 04a0e11..3f5a88a 100644
--- a/test/common
+++ b/test/common
@@ -105,6 +105,9 @@ _cleanup()
lsmod | grep -q "idxd_uacce" && {
rmmod idxd_uacce
}
+ lsmod | grep -q "iax_crypto" && {
+ rmmod iax_crypto
+ }
lsmod | grep -q "idxd" && {
rmmod idxd
}
--
2.27.0
5 months, 3 weeks
[PATCH 0/5] Remove redundant dependencies
by Ramesh Thomas
libkmod and libudev are not necessary and are removed in this patchset.
The presence of the idxd kernel module is validated; if it is not
loaded, an error message is printed before the app exits. Accel-config
will not try to load the kernel modules. The assumption is that systems
where accel-config runs will already have the necessary kernel modules
installed.
Ramesh Thomas (5):
accel-config: Remove use of libkmod
accel-config: Remove libkmod dependencies in build files
accel-config: Remove references to unused libudev
accel-config: Remove libkmod dependencies in build files
accel-config: Check for file existence instead of read permission
Makefile.am.in | 2 --
README.md | 4 +--
accfg-test.spec.in | 2 --
accfg.spec.in | 2 --
accfg/Makefile.am | 3 +--
accfg/accel-config.c | 40 +++-------------------------
accfg/lib/Makefile.am | 4 +--
accfg/lib/libaccfg.c | 4 +--
accfg/lib/private.h | 12 +--------
configure.ac | 2 --
debian/control | 2 --
test/Makefile.am | 2 +-
test/core.c | 1 -
test/libaccfg.c | 61 +++++++------------------------------------
14 files changed, 21 insertions(+), 120 deletions(-)
--
2.26.3
5 months, 3 weeks