[PATCH v3] powerpc/papr_scm: Implement support for H_SCM_FLUSH hcall
by Shivaprasad G Bhat
Add support for the ND_REGION_ASYNC capability if the device tree
indicates the 'ibm,hcall-flush-required' property in the NVDIMM node.
Flushing is done by issuing the H_SCM_FLUSH hcall to the hypervisor.
If the flush request fails, the hypervisor is expected to reflect the
problem in a subsequent nvdimm H_SCM_HEALTH call.
This patch prevents mmap of namespaces with the MAP_SYNC flag if the
nvdimm requires an explicit flush[1].
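The flush sequence described above (reissue the hcall with the returned continue-token while the hypervisor reports busy) can be sketched in plain C; `fake_scm_flush()` and the constants below are illustrative stand-ins, not the real PAPR interface:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the hypervisor interface. */
#define H_SUCCESS 0
#define H_BUSY    1

static int fake_busy_rounds = 3;

/* Simulated H_SCM_FLUSH: reports H_BUSY a few times, handing back an
 * updated continue-token, then reports success. */
static int fake_scm_flush(uint32_t drc_index, uint64_t *token)
{
	(void)drc_index;
	if (fake_busy_rounds-- > 0) {
		*token += 1;		/* opaque progress cookie */
		return H_BUSY;
	}
	return H_SUCCESS;
}

/* The retry pattern the patch uses: keep reissuing the hcall with the
 * token returned by the previous call until it stops reporting H_BUSY. */
static int flush_region(uint32_t drc_index)
{
	uint64_t token = 0;
	int rc;

	do {
		rc = fake_scm_flush(drc_index, &token);
	} while (rc == H_BUSY);

	return rc;
}
```

The real driver additionally sleeps on long-busy return codes, as the diff below shows.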
References:
[1] https://github.com/avocado-framework-tests/avocado-misc-tests/blob/master...
Signed-off-by: Shivaprasad G Bhat <sbhat(a)linux.ibm.com>
---
v2 - https://www.spinics.net/lists/kvm-ppc/msg18799.html
Changes from v2:
- Fixed the commit message.
- Add dev_dbg before the H_SCM_FLUSH hcall
v1 - https://www.spinics.net/lists/kvm-ppc/msg18272.html
Changes from v1:
- Hcall semantics finalized, all changes are to accommodate them.
Documentation/powerpc/papr_hcalls.rst | 14 ++++++++++
arch/powerpc/include/asm/hvcall.h | 3 +-
arch/powerpc/platforms/pseries/papr_scm.c | 40 +++++++++++++++++++++++++++++
3 files changed, 56 insertions(+), 1 deletion(-)
diff --git a/Documentation/powerpc/papr_hcalls.rst b/Documentation/powerpc/papr_hcalls.rst
index 48fcf1255a33..648f278eea8f 100644
--- a/Documentation/powerpc/papr_hcalls.rst
+++ b/Documentation/powerpc/papr_hcalls.rst
@@ -275,6 +275,20 @@ Health Bitmap Flags:
Given a DRC Index collect the performance statistics for NVDIMM and copy them
to the resultBuffer.
+**H_SCM_FLUSH**
+
+| Input: *drcIndex, continue-token*
+| Out: *continue-token*
+| Return Value: *H_SUCCESS, H_Parameter, H_P2, H_BUSY*
+
+Given a DRC Index, flush the data to the backend NVDIMM device.
+
+The hcall returns H_BUSY when the flush takes a long time and needs to
+be issued multiple times in order to be completely serviced. The
+*continue-token* from the output is to be passed in the argument list of
+subsequent hcalls to the hypervisor until the request is completely
+serviced, at which point the hypervisor returns H_SUCCESS or an error.
+
References
==========
.. [1] "Power Architecture Platform Reference"
diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index ed6086d57b22..9f7729a97ebd 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -315,7 +315,8 @@
#define H_SCM_HEALTH 0x400
#define H_SCM_PERFORMANCE_STATS 0x418
#define H_RPT_INVALIDATE 0x448
-#define MAX_HCALL_OPCODE H_RPT_INVALIDATE
+#define H_SCM_FLUSH 0x44C
+#define MAX_HCALL_OPCODE H_SCM_FLUSH
/* Scope args for H_SCM_UNBIND_ALL */
#define H_UNBIND_SCOPE_ALL (0x1)
diff --git a/arch/powerpc/platforms/pseries/papr_scm.c b/arch/powerpc/platforms/pseries/papr_scm.c
index 835163f54244..b7a47fcc5aa5 100644
--- a/arch/powerpc/platforms/pseries/papr_scm.c
+++ b/arch/powerpc/platforms/pseries/papr_scm.c
@@ -93,6 +93,7 @@ struct papr_scm_priv {
uint64_t block_size;
int metadata_size;
bool is_volatile;
+ bool hcall_flush_required;
uint64_t bound_addr;
@@ -117,6 +118,39 @@ struct papr_scm_priv {
size_t stat_buffer_len;
};
+static int papr_scm_pmem_flush(struct nd_region *nd_region,
+ struct bio *bio __maybe_unused)
+{
+ struct papr_scm_priv *p = nd_region_provider_data(nd_region);
+ unsigned long ret_buf[PLPAR_HCALL_BUFSIZE];
+ uint64_t token = 0;
+ int64_t rc;
+
+ dev_dbg(&p->pdev->dev, "flush drc 0x%x", p->drc_index);
+
+ do {
+ rc = plpar_hcall(H_SCM_FLUSH, ret_buf, p->drc_index, token);
+ token = ret_buf[0];
+
+ /* Check if we are stalled for some time */
+ if (H_IS_LONG_BUSY(rc)) {
+ msleep(get_longbusy_msecs(rc));
+ rc = H_BUSY;
+ } else if (rc == H_BUSY) {
+ cond_resched();
+ }
+ } while (rc == H_BUSY);
+
+ if (rc) {
+ dev_err(&p->pdev->dev, "flush error: %lld", rc);
+ rc = -EIO;
+ } else {
+ dev_dbg(&p->pdev->dev, "flush drc 0x%x complete", p->drc_index);
+ }
+
+ return rc;
+}
+
static LIST_HEAD(papr_nd_regions);
static DEFINE_MUTEX(papr_ndr_lock);
@@ -943,6 +977,11 @@ static int papr_scm_nvdimm_init(struct papr_scm_priv *p)
ndr_desc.num_mappings = 1;
ndr_desc.nd_set = &p->nd_set;
+ if (p->hcall_flush_required) {
+ set_bit(ND_REGION_ASYNC, &ndr_desc.flags);
+ ndr_desc.flush = papr_scm_pmem_flush;
+ }
+
if (p->is_volatile)
p->region = nvdimm_volatile_region_create(p->bus, &ndr_desc);
else {
@@ -1088,6 +1127,7 @@ static int papr_scm_probe(struct platform_device *pdev)
p->block_size = block_size;
p->blocks = blocks;
p->is_volatile = !of_property_read_bool(dn, "ibm,cache-flush-required");
+ p->hcall_flush_required = of_property_read_bool(dn, "ibm,hcall-flush-required");
/* We just need to ensure that set cookies are unique across */
uuid_parse(uuid_str, (uuid_t *) uuid);
[PATCH v2] powerpc/mm: Add cond_resched() while removing hpte mappings
by Vaibhav Jain
While removing a large number of mappings from the hash page tables on
large-memory systems, a soft-lockup is reported because of the time
spent inside htab_remove_mapping(), like the one below:
watchdog: BUG: soft lockup - CPU#8 stuck for 23s!
<snip>
NIP plpar_hcall+0x38/0x58
LR pSeries_lpar_hpte_invalidate+0x68/0xb0
Call Trace:
0x1fffffffffff000 (unreliable)
pSeries_lpar_hpte_removebolted+0x9c/0x230
hash__remove_section_mapping+0xec/0x1c0
remove_section_mapping+0x28/0x3c
arch_remove_memory+0xfc/0x150
devm_memremap_pages_release+0x180/0x2f0
devm_action_release+0x30/0x50
release_nodes+0x28c/0x300
device_release_driver_internal+0x16c/0x280
unbind_store+0x124/0x170
drv_attr_store+0x44/0x60
sysfs_kf_write+0x64/0x90
kernfs_fop_write+0x1b0/0x290
__vfs_write+0x3c/0x70
vfs_write+0xd4/0x270
ksys_write+0xdc/0x130
system_call+0x5c/0x70
Fix this by adding a cond_resched() to the loop in
htab_remove_mapping() that issues the hcall to remove hpte mappings. The
call to cond_resched() is issued every HZ jiffies, which should prevent
the soft-lockup from being reported.
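The time-bounded yielding described above can be modelled outside the kernel; `jiffies`, `HZ` and `cond_resched()` below are simulated stand-ins that count how often the loop would yield:

```c
#include <assert.h>

/* Simulated time-keeping: HZ jiffies stand in for one scheduler tick
 * interval; these are not the kernel symbols. */
#define HZ 100
static unsigned long jiffies;		/* advanced by the "work" below */
static int resched_calls;

static void cond_resched(void) { resched_calls++; }	/* count yields */
static int time_after(unsigned long a, unsigned long b)
{
	return (long)(a - b) > 0;
}

/* The pattern from the patch: perform many units of work, but only
 * yield the CPU once per HZ jiffies rather than on every iteration. */
static void remove_mappings(int n)
{
	unsigned long time_limit = jiffies + HZ;
	int i;

	for (i = 0; i < n; i++) {
		jiffies++;	/* each removal consumes one jiffy here */

		if (time_after(jiffies, time_limit)) {
			cond_resched();
			time_limit = jiffies + HZ;
		}
	}
}
```

This keeps the yield overhead bounded regardless of how many mappings are removed per tick.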
Suggested-by: Aneesh Kumar K.V <aneesh.kumar(a)linux.ibm.com>
Signed-off-by: Vaibhav Jain <vaibhav(a)linux.ibm.com>
---
Changelog:
v2: Issue cond_resched() every HZ jiffies instead of each iteration of
the loop. [ Christophe Leroy ]
---
arch/powerpc/mm/book3s64/hash_utils.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 581b20a2feaf..286e7e8cb919 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -338,7 +338,7 @@ int htab_bolt_mapping(unsigned long vstart, unsigned long vend,
int htab_remove_mapping(unsigned long vstart, unsigned long vend,
int psize, int ssize)
{
- unsigned long vaddr;
+ unsigned long vaddr, time_limit;
unsigned int step, shift;
int rc;
int ret = 0;
@@ -351,8 +351,19 @@ int htab_remove_mapping(unsigned long vstart, unsigned long vend,
/* Unmap the full range specificied */
vaddr = ALIGN_DOWN(vstart, step);
+ time_limit = jiffies + HZ;
+
for (;vaddr < vend; vaddr += step) {
rc = mmu_hash_ops.hpte_removebolted(vaddr, psize, ssize);
+
+ /*
+ * For large number of mappings introduce a cond_resched()
+ * to prevent softlockup warnings.
+ */
+ if (time_after(jiffies, time_limit)) {
+ cond_resched();
+ time_limit = jiffies + HZ;
+ }
if (rc == -ENOENT) {
ret = -ENOENT;
continue;
--
2.30.2
[PATCH] nvdimm/ndtest: Add support for error injection tests
by Santosh Sivaraj
Add the necessary support for the error-injection family of tests on
non-ACPI platforms.
Signed-off-by: Santosh Sivaraj <santosh(a)fossix.org>
---
tools/testing/nvdimm/test/ndtest.c | 455 ++++++++++++++++++++++++++++-
tools/testing/nvdimm/test/ndtest.h | 25 ++
2 files changed, 477 insertions(+), 3 deletions(-)
This patch is based on top of Shiva's "Enable SMART test" patch[1].
[1]: https://lkml.kernel.org/r/161711723989.556.4220555988871072543.stgit@9add...
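The patch advertises supported bus commands and PDSM functions through bitmasks; the construction and testing of such a mask can be sketched as follows (the command numbers are hypothetical placeholders for the real ND_CMD_* values):

```c
#include <assert.h>

/* Hypothetical command numbers standing in for ND_CMD_* constants;
 * the real values live in the kernel UAPI headers. */
enum {
	CMD_ARS_CAP	= 1,
	CMD_ARS_START	= 2,
	CMD_ARS_STATUS	= 3,
	CMD_CLEAR_ERROR	= 4,
	CMD_CALL	= 10,
};

/* One bit per supported command, mirroring PAPR_PMEM_BUS_CMD_MASK. */
#define BUS_CMD_MASK \
	(1UL << CMD_ARS_CAP | 1UL << CMD_ARS_START | 1UL << CMD_ARS_STATUS \
	 | 1UL << CMD_CLEAR_ERROR | 1UL << CMD_CALL)

/* The bus core tests a bit like this before dispatching a command. */
static int cmd_supported(unsigned long mask, unsigned int cmd)
{
	return !!(mask & (1UL << cmd));
}
```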
diff --git a/tools/testing/nvdimm/test/ndtest.c b/tools/testing/nvdimm/test/ndtest.c
index bb47b145466d..09d98317bf4e 100644
--- a/tools/testing/nvdimm/test/ndtest.c
+++ b/tools/testing/nvdimm/test/ndtest.c
@@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0-only
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define pr_fmt(fmt) "ndtest: " fmt
#include <linux/platform_device.h>
#include <linux/device.h>
@@ -42,6 +42,7 @@ static DEFINE_SPINLOCK(ndtest_lock);
static struct ndtest_priv *instances[NUM_INSTANCES];
static struct class *ndtest_dimm_class;
static struct gen_pool *ndtest_pool;
+static struct workqueue_struct *ndtest_wq;
static const struct nd_papr_pdsm_health health_defaults = {
.dimm_unarmed = 0,
@@ -496,6 +497,139 @@ static int ndtest_pdsm_health_set_threshold(struct ndtest_dimm *dimm,
return 0;
}
+static void ars_complete_all(struct ndtest_priv *p)
+{
+ int i;
+
+ for (i = 0; i < p->config->num_regions; i++) {
+ struct ndtest_region *region = &p->config->regions[i];
+
+ if (region->region)
+ nvdimm_region_notify(region->region,
+ NVDIMM_REVALIDATE_POISON);
+ }
+}
+
+static void ndtest_scrub(struct work_struct *work)
+{
+ struct ndtest_priv *p = container_of(work, typeof(struct ndtest_priv),
+ dwork.work);
+ struct badrange_entry *be;
+ int rc, i = 0;
+
+ spin_lock(&p->badrange.lock);
+ list_for_each_entry(be, &p->badrange.list, list) {
+ rc = nvdimm_bus_add_badrange(p->bus, be->start, be->length);
+ if (rc)
+ dev_err(&p->pdev.dev, "Failed to process ARS records\n");
+ else
+ i++;
+ }
+ spin_unlock(&p->badrange.lock);
+
+ if (i == 0) {
+ queue_delayed_work(ndtest_wq, &p->dwork, HZ);
+ return;
+ }
+
+ ars_complete_all(p);
+ p->scrub_count++;
+
+ mutex_lock(&p->ars_lock);
+ sysfs_notify_dirent(p->scrub_state);
+ clear_bit(ARS_BUSY, &p->scrub_flags);
+ clear_bit(ARS_POLL, &p->scrub_flags);
+ set_bit(ARS_VALID, &p->scrub_flags);
+ mutex_unlock(&p->ars_lock);
+
+}
+
+static int ndtest_scrub_notify(struct ndtest_priv *p)
+{
+ if (!test_and_set_bit(ARS_BUSY, &p->scrub_flags))
+ queue_delayed_work(ndtest_wq, &p->dwork, HZ);
+
+ return 0;
+}
+
+static int ndtest_ars_inject(struct ndtest_priv *p,
+ struct nd_cmd_ars_err_inj *inj,
+ unsigned int buf_len)
+{
+ int rc;
+
+ if (buf_len != sizeof(*inj)) {
+ dev_dbg(&p->bus->dev, "buflen: %u, inj size: %lu\n",
+ buf_len, sizeof(*inj));
+ rc = -EINVAL;
+ goto err;
+ }
+
+ rc = badrange_add(&p->badrange, inj->err_inj_spa_range_base,
+ inj->err_inj_spa_range_length);
+
+ if (inj->err_inj_options & (1 << ND_ARS_ERR_INJ_OPT_NOTIFY))
+ ndtest_scrub_notify(p);
+
+ inj->status = 0;
+
+ return 0;
+
+err:
+ inj->status = NFIT_ARS_INJECT_INVALID;
+ return rc;
+}
+
+static int ndtest_ars_inject_clear(struct ndtest_priv *p,
+ struct nd_cmd_ars_err_inj_clr *inj,
+ unsigned int buf_len)
+{
+ int rc;
+
+ if (buf_len != sizeof(*inj)) {
+ rc = -EINVAL;
+ goto err;
+ }
+
+ if (inj->err_inj_clr_spa_range_length <= 0) {
+ rc = -EINVAL;
+ goto err;
+ }
+
+ badrange_forget(&p->badrange, inj->err_inj_clr_spa_range_base,
+ inj->err_inj_clr_spa_range_length);
+
+ inj->status = 0;
+ return 0;
+
+err:
+ inj->status = NFIT_ARS_INJECT_INVALID;
+ return rc;
+}
+
+static int ndtest_ars_inject_status(struct ndtest_priv *p,
+ struct nd_cmd_ars_err_inj_stat *stat,
+ unsigned int buf_len)
+{
+ struct badrange_entry *be;
+ int max = SZ_4K / sizeof(struct nd_error_stat_query_record);
+ int i = 0;
+
+ stat->status = 0;
+ spin_lock(&p->badrange.lock);
+ list_for_each_entry(be, &p->badrange.list, list) {
+ stat->record[i].err_inj_stat_spa_range_base = be->start;
+ stat->record[i].err_inj_stat_spa_range_length = be->length;
+ i++;
+		if (i >= max)
+ break;
+ }
+ spin_unlock(&p->badrange.lock);
+ stat->inj_err_rec_count = i;
+
+ return 0;
+}
+
static int ndtest_dimm_cmd_call(struct ndtest_dimm *dimm, unsigned int buf_len,
void *buf)
{
@@ -519,6 +653,157 @@ static int ndtest_dimm_cmd_call(struct ndtest_dimm *dimm, unsigned int buf_len,
return 0;
}
+static int ndtest_bus_cmd_call(struct nvdimm_bus_descriptor *nd_desc, void *buf,
+ unsigned int buf_len, int *cmd_rc)
+{
+ struct nd_cmd_pkg *pkg = buf;
+ struct ndtest_priv *p = container_of(nd_desc, struct ndtest_priv,
+ bus_desc);
+ void *payload = pkg->nd_payload;
+ unsigned int func = pkg->nd_command;
+ unsigned int len = pkg->nd_size_in + pkg->nd_size_out;
+
+ switch (func) {
+ case PAPR_PDSM_INJECT_SET:
+ return ndtest_ars_inject(p, payload, len);
+ case PAPR_PDSM_INJECT_CLEAR:
+ return ndtest_ars_inject_clear(p, payload, len);
+ case PAPR_PDSM_INJECT_GET:
+ return ndtest_ars_inject_status(p, payload, len);
+ }
+
+ return -ENOTTY;
+}
+
+static int ndtest_cmd_ars_cap(struct ndtest_priv *p, struct nd_cmd_ars_cap *cmd,
+ unsigned int buf_len)
+{
+ int ars_recs;
+
+ if (buf_len < sizeof(*cmd))
+ return -EINVAL;
+
+ /* for testing, only store up to n records that fit within a page */
+ ars_recs = SZ_4K / sizeof(struct nd_ars_record);
+
+ cmd->max_ars_out = sizeof(struct nd_cmd_ars_status)
+ + ars_recs * sizeof(struct nd_ars_record);
+ cmd->status = (ND_ARS_PERSISTENT | ND_ARS_VOLATILE) << 16;
+ cmd->clear_err_unit = 256;
+ p->max_ars = cmd->max_ars_out;
+
+ return 0;
+}
+
+static void post_ars_status(struct ars_state *state,
+ struct badrange *badrange, u64 addr, u64 len)
+{
+ struct nd_cmd_ars_status *status;
+ struct nd_ars_record *record;
+ struct badrange_entry *be;
+ u64 end = addr + len - 1;
+ int i = 0;
+
+ state->deadline = jiffies + 1*HZ;
+ status = state->ars_status;
+ status->status = 0;
+ status->address = addr;
+ status->length = len;
+ status->type = ND_ARS_PERSISTENT;
+
+ spin_lock(&badrange->lock);
+ list_for_each_entry(be, &badrange->list, list) {
+ u64 be_end = be->start + be->length - 1;
+ u64 rstart, rend;
+
+ /* skip entries outside the range */
+ if (be_end < addr || be->start > end)
+ continue;
+
+ rstart = (be->start < addr) ? addr : be->start;
+ rend = (be_end < end) ? be_end : end;
+ record = &status->records[i];
+ record->handle = 0;
+ record->err_address = rstart;
+ record->length = rend - rstart + 1;
+ i++;
+ }
+ spin_unlock(&badrange->lock);
+
+ status->num_records = i;
+ status->out_length = sizeof(struct nd_cmd_ars_status)
+ + i * sizeof(struct nd_ars_record);
+}
+
+#define NFIT_ARS_STATUS_BUSY (1 << 16)
+#define NFIT_ARS_START_BUSY 6
+
+static int ndtest_cmd_ars_start(struct ndtest_priv *priv,
+ struct nd_cmd_ars_start *start,
+ unsigned int buf_len, int *cmd_rc)
+{
+ if (buf_len < sizeof(*start))
+ return -EINVAL;
+
+ spin_lock(&priv->state.lock);
+ if (time_before(jiffies, priv->state.deadline)) {
+ start->status = NFIT_ARS_START_BUSY;
+ *cmd_rc = -EBUSY;
+ } else {
+ start->status = 0;
+ start->scrub_time = 1;
+ post_ars_status(&priv->state, &priv->badrange,
+ start->address, start->length);
+ *cmd_rc = 0;
+ }
+ spin_unlock(&priv->state.lock);
+
+ return 0;
+}
+
+static int ndtest_cmd_ars_status(struct ndtest_priv *priv,
+ struct nd_cmd_ars_status *status,
+ unsigned int buf_len, int *cmd_rc)
+{
+ if (buf_len < priv->state.ars_status->out_length)
+ return -EINVAL;
+
+ spin_lock(&priv->state.lock);
+ if (time_before(jiffies, priv->state.deadline)) {
+ memset(status, 0, buf_len);
+ status->status = NFIT_ARS_STATUS_BUSY;
+ status->out_length = sizeof(*status);
+ *cmd_rc = -EBUSY;
+ } else {
+ memcpy(status, priv->state.ars_status,
+ priv->state.ars_status->out_length);
+ *cmd_rc = 0;
+ }
+ spin_unlock(&priv->state.lock);
+
+ return 0;
+}
+
+static int ndtest_cmd_clear_error(struct ndtest_priv *priv,
+ struct nd_cmd_clear_error *inj,
+ unsigned int buf_len, int *cmd_rc)
+{
+ const u64 mask = 255;
+
+ if (buf_len < sizeof(*inj))
+ return -EINVAL;
+
+ if ((inj->address & mask) || (inj->length & mask))
+ return -EINVAL;
+
+ badrange_forget(&priv->badrange, inj->address, inj->length);
+ inj->status = 0;
+ inj->cleared = inj->length;
+ *cmd_rc = 0;
+
+ return 0;
+}
+
static int ndtest_ctl(struct nvdimm_bus_descriptor *nd_desc,
struct nvdimm *nvdimm, unsigned int cmd, void *buf,
unsigned int buf_len, int *cmd_rc)
@@ -531,8 +816,32 @@ static int ndtest_ctl(struct nvdimm_bus_descriptor *nd_desc,
*cmd_rc = 0;
- if (!nvdimm)
- return -EINVAL;
+ if (!nvdimm) {
+ struct ndtest_priv *priv;
+
+ if (!nd_desc)
+ return -ENOTTY;
+
+ priv = container_of(nd_desc, struct ndtest_priv, bus_desc);
+ switch (cmd) {
+ case ND_CMD_CALL:
+ return ndtest_bus_cmd_call(nd_desc, buf, buf_len,
+ cmd_rc);
+ case ND_CMD_ARS_CAP:
+ return ndtest_cmd_ars_cap(priv, buf, buf_len);
+ case ND_CMD_ARS_START:
+ return ndtest_cmd_ars_start(priv, buf, buf_len, cmd_rc);
+ case ND_CMD_ARS_STATUS:
+ return ndtest_cmd_ars_status(priv, buf, buf_len,
+ cmd_rc);
+ case ND_CMD_CLEAR_ERROR:
+ return ndtest_cmd_clear_error(priv, buf, buf_len,
+ cmd_rc);
+ default:
+ dev_dbg(&priv->pdev.dev, "Invalid command\n");
+ return -ENOTTY;
+ }
+ }
dimm = nvdimm_provider_data(nvdimm);
if (!dimm)
@@ -683,6 +992,9 @@ static void *ndtest_alloc_resource(struct ndtest_priv *p, size_t size,
return NULL;
buf = vmalloc(size);
+ if (!buf)
+ return NULL;
+
if (size >= DIMM_SIZE)
__dma = gen_pool_alloc_algo(ndtest_pool, size,
gen_pool_first_fit_align, &data);
@@ -1052,6 +1364,7 @@ static ssize_t flags_show(struct device *dev,
}
static DEVICE_ATTR_RO(flags);
+
#define PAPR_PMEM_DIMM_CMD_MASK \
((1U << PAPR_PDSM_HEALTH) \
| (1U << PAPR_PDSM_HEALTH_INJECT) \
@@ -1195,11 +1508,102 @@ static const struct attribute_group of_node_attribute_group = {
.attrs = of_node_attributes,
};
+#define PAPR_PMEM_BUS_DSM_MASK \
+ ((1U << PAPR_PDSM_INJECT_SET) \
+ | (1U << PAPR_PDSM_INJECT_GET) \
+ | (1U << PAPR_PDSM_INJECT_CLEAR))
+
+static ssize_t bus_dsm_mask_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ return sprintf(buf, "%#x\n", PAPR_PMEM_BUS_DSM_MASK);
+}
+static struct device_attribute dev_attr_bus_dsm_mask = {
+ .attr = { .name = "dsm_mask", .mode = 0444 },
+ .show = bus_dsm_mask_show,
+};
+
+static ssize_t scrub_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct nvdimm_bus_descriptor *nd_desc;
+ struct ndtest_priv *p;
+ ssize_t rc = -ENXIO;
+	bool busy = false;
+
+ device_lock(dev);
+ nd_desc = dev_get_drvdata(dev);
+ if (!nd_desc) {
+ device_unlock(dev);
+ return rc;
+ }
+
+ p = container_of(nd_desc, struct ndtest_priv, bus_desc);
+
+ mutex_lock(&p->ars_lock);
+ busy = test_bit(ARS_BUSY, &p->scrub_flags) &&
+ !test_bit(ARS_CANCEL, &p->scrub_flags);
+ rc = sprintf(buf, "%d%s", p->scrub_count, busy ? "+\n" : "\n");
+ if (busy && capable(CAP_SYS_RAWIO) &&
+ !test_and_set_bit(ARS_POLL, &p->scrub_flags))
+ mod_delayed_work(ndtest_wq, &p->dwork, HZ);
+
+ mutex_unlock(&p->ars_lock);
+
+ device_unlock(dev);
+ return rc;
+}
+
+static ssize_t scrub_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ struct nvdimm_bus_descriptor *nd_desc;
+ struct ndtest_priv *p;
+ ssize_t rc = 0;
+ long val;
+
+ rc = kstrtol(buf, 0, &val);
+ if (rc)
+ return rc;
+ if (val != 1)
+ return -EINVAL;
+ device_lock(dev);
+ nd_desc = dev_get_drvdata(dev);
+ if (nd_desc) {
+ p = container_of(nd_desc, struct ndtest_priv, bus_desc);
+
+ ndtest_scrub_notify(p);
+ }
+ device_unlock(dev);
+
+ return size;
+}
+static DEVICE_ATTR_RW(scrub);
+
+static struct attribute *ndtest_attributes[] = {
+ &dev_attr_bus_dsm_mask.attr,
+ &dev_attr_scrub.attr,
+ NULL,
+};
+
+static const struct attribute_group ndtest_attribute_group = {
+ .name = "papr",
+ .attrs = ndtest_attributes,
+};
+
static const struct attribute_group *ndtest_attribute_groups[] = {
&of_node_attribute_group,
+ &ndtest_attribute_group,
NULL,
};
+#define PAPR_PMEM_BUS_CMD_MASK \
+ (1UL << ND_CMD_ARS_CAP \
+ | 1UL << ND_CMD_ARS_START \
+ | 1UL << ND_CMD_ARS_STATUS \
+ | 1UL << ND_CMD_CLEAR_ERROR \
+ | 1UL << ND_CMD_CALL)
+
static int ndtest_bus_register(struct ndtest_priv *p)
{
p->config = &bus_configs[p->pdev.id];
@@ -1207,7 +1611,9 @@ static int ndtest_bus_register(struct ndtest_priv *p)
p->bus_desc.ndctl = ndtest_ctl;
p->bus_desc.module = THIS_MODULE;
p->bus_desc.provider_name = NULL;
+ p->bus_desc.cmd_mask = PAPR_PMEM_BUS_CMD_MASK;
p->bus_desc.attr_groups = ndtest_attribute_groups;
+ p->bus_desc.bus_family_mask = NVDIMM_FAMILY_PAPR;
set_bit(NVDIMM_FAMILY_PAPR, &p->bus_desc.dimm_family_mask);
@@ -1228,6 +1634,33 @@ static int ndtest_remove(struct platform_device *pdev)
return 0;
}
+static int ndtest_init_ars(struct ndtest_priv *p)
+{
+ struct kernfs_node *papr_node;
+ struct device *bus_dev;
+
+ p->state.ars_status = devm_kzalloc(
+ &p->pdev.dev, sizeof(struct nd_cmd_ars_status) + SZ_4K,
+ GFP_KERNEL);
+ if (!p->state.ars_status)
+ return -ENOMEM;
+
+ bus_dev = to_nvdimm_bus_dev(p->bus);
+ papr_node = sysfs_get_dirent(bus_dev->kobj.sd, "papr");
+ if (!papr_node) {
+ dev_err(&p->pdev.dev, "sysfs_get_dirent 'papr' failed\n");
+ return -ENOENT;
+ }
+
+ p->scrub_state = sysfs_get_dirent(papr_node, "scrub");
+ if (!p->scrub_state) {
+ dev_err(&p->pdev.dev, "sysfs_get_dirent 'scrub' failed\n");
+ return -ENOENT;
+ }
+
+ return 0;
+}
+
static int ndtest_probe(struct platform_device *pdev)
{
struct ndtest_priv *p;
@@ -1252,6 +1685,10 @@ static int ndtest_probe(struct platform_device *pdev)
if (rc)
goto err;
+ rc = ndtest_init_ars(p);
+ if (rc)
+ goto err;
+
rc = devm_add_action_or_reset(&pdev->dev, put_dimms, p);
if (rc)
goto err;
@@ -1299,6 +1736,7 @@ static void cleanup_devices(void)
if (ndtest_pool)
gen_pool_destroy(ndtest_pool);
+ destroy_workqueue(ndtest_wq);
if (ndtest_dimm_class)
class_destroy(ndtest_dimm_class);
@@ -1319,6 +1757,10 @@ static __init int ndtest_init(void)
nfit_test_setup(ndtest_resource_lookup, NULL);
+ ndtest_wq = create_singlethread_workqueue("nfit");
+ if (!ndtest_wq)
+ return -ENOMEM;
+
ndtest_dimm_class = class_create(THIS_MODULE, "nfit_test_dimm");
if (IS_ERR(ndtest_dimm_class)) {
rc = PTR_ERR(ndtest_dimm_class);
@@ -1348,6 +1790,7 @@ static __init int ndtest_init(void)
}
INIT_LIST_HEAD(&priv->resources);
+ badrange_init(&priv->badrange);
pdev = &priv->pdev;
pdev->name = KBUILD_MODNAME;
pdev->id = i;
@@ -1360,6 +1803,11 @@ static __init int ndtest_init(void)
get_device(&pdev->dev);
instances[i] = priv;
+
+ /* Everything about ARS here */
+ INIT_DELAYED_WORK(&priv->dwork, ndtest_scrub);
+ mutex_init(&priv->ars_lock);
+ spin_lock_init(&priv->state.lock);
}
rc = platform_driver_register(&ndtest_driver);
@@ -1377,6 +1825,7 @@ static __init int ndtest_init(void)
static __exit void ndtest_exit(void)
{
+ flush_workqueue(ndtest_wq);
cleanup_devices();
platform_driver_unregister(&ndtest_driver);
}
diff --git a/tools/testing/nvdimm/test/ndtest.h b/tools/testing/nvdimm/test/ndtest.h
index d29638b6a332..d92c4f3df344 100644
--- a/tools/testing/nvdimm/test/ndtest.h
+++ b/tools/testing/nvdimm/test/ndtest.h
@@ -83,17 +83,34 @@ enum dimm_type {
NDTEST_REGION_TYPE_BLK = 0x1,
};
+struct ars_state {
+ struct nd_cmd_ars_status *ars_status;
+ unsigned long deadline;
+ spinlock_t lock;
+};
+
struct ndtest_priv {
struct platform_device pdev;
struct device_node *dn;
struct list_head resources;
struct nvdimm_bus_descriptor bus_desc;
+ struct delayed_work dwork;
+ struct mutex ars_lock;
struct nvdimm_bus *bus;
struct ndtest_config *config;
+ struct ars_state state;
+ struct badrange badrange;
+ struct nd_cmd_ars_status *ars_status;
+ struct kernfs_node *scrub_state;
dma_addr_t *dcr_dma;
dma_addr_t *label_dma;
dma_addr_t *dimm_dma;
+
+ unsigned long scrub_flags;
+ unsigned long ars_state;
+ unsigned int max_ars;
+ int scrub_count;
};
struct ndtest_blk_mmio {
@@ -235,4 +252,12 @@ struct nd_pkg_pdsm {
union nd_pdsm_payload payload;
} __packed;
+enum scrub_flags {
+ ARS_BUSY,
+ ARS_CANCEL,
+ ARS_VALID,
+ ARS_POLL,
+ ARS_FAILED,
+};
+
#endif /* NDTEST_H */
--
2.30.2
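The range-clamping logic in post_ars_status() above can be exercised in isolation; this sketch mirrors its inclusive-end arithmetic, with a hypothetical helper name:

```c
#include <assert.h>
#include <stdint.h>

/* Clamp a bad range [be_start, be_start + be_len - 1] to the queried
 * window [addr, addr + len - 1], the same inclusive-end arithmetic as
 * post_ars_status(). Returns 1 and fills *rstart/*rlen on overlap,
 * 0 when the ranges are disjoint. */
static int clamp_record(uint64_t be_start, uint64_t be_len,
			uint64_t addr, uint64_t len,
			uint64_t *rstart, uint64_t *rlen)
{
	uint64_t end = addr + len - 1;
	uint64_t be_end = be_start + be_len - 1;
	uint64_t rend;

	/* skip entries entirely outside the queried window */
	if (be_end < addr || be_start > end)
		return 0;

	*rstart = (be_start < addr) ? addr : be_start;
	rend = (be_end < end) ? be_end : end;
	*rlen = rend - *rstart + 1;
	return 1;
}
```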
[PATCH] powerpc/papr_scm: Reduce error severity if nvdimm stats inaccessible
by Vaibhav Jain
Currently drc_pmem_query_stats() generates a dev_err when the
"Enable Performance Information Collection" feature is disabled from the
HMC. The error is of the form below:
papr_scm ibm,persistent-memory:ibm,pmemory@44104001: Failed to query
performance stats, Err:-10
This error message confuses users, as it implies a possible problem
with the nvdimm even though it is due to a disabled feature.
Fix this by explicitly handling the H_AUTHORITY error from the
H_SCM_PERFORMANCE_STATS hcall and generating a warning instead of an
error, saying that performance stats are inaccessible.
Fixes: 2d02bf835e57('powerpc/papr_scm: Fetch nvdimm performance stats from PHYP')
Signed-off-by: Vaibhav Jain <vaibhav(a)linux.ibm.com>
---
arch/powerpc/platforms/pseries/papr_scm.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/powerpc/platforms/pseries/papr_scm.c b/arch/powerpc/platforms/pseries/papr_scm.c
index 835163f54244..9216424f8be3 100644
--- a/arch/powerpc/platforms/pseries/papr_scm.c
+++ b/arch/powerpc/platforms/pseries/papr_scm.c
@@ -277,6 +277,9 @@ static ssize_t drc_pmem_query_stats(struct papr_scm_priv *p,
dev_err(&p->pdev->dev,
"Unknown performance stats, Err:0x%016lX\n", ret[0]);
return -ENOENT;
+ } else if (rc == H_AUTHORITY) {
+		dev_warn(&p->pdev->dev, "Performance stats inaccessible");
+ return -EPERM;
} else if (rc != H_SUCCESS) {
dev_err(&p->pdev->dev,
"Failed to query performance stats, Err:%lld\n", rc);
--
2.30.2
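The error mapping this patch introduces can be sketched standalone; the H_* values and helper name below are illustrative placeholders, not the real PAPR return codes:

```c
#include <assert.h>
#include <errno.h>

/* Illustrative hypervisor return codes (placeholders only). */
#define H_SUCCESS	0
#define H_AUTHORITY	(-9)
#define H_UNSUPPORTED	(-10)

/* Mirror of the mapping the patch adds: a disabled performance-
 * collection feature (H_AUTHORITY) becomes -EPERM and only warrants a
 * warning, while any other failure remains a hard error. */
static int stats_rc_to_errno(long rc)
{
	if (rc == H_SUCCESS)
		return 0;
	if (rc == H_AUTHORITY)
		return -EPERM;	/* feature disabled from the HMC */
	return -EIO;		/* genuine failure */
}
```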
[GIT PULL] libnvdimm fixes for v5.12-rc8 / final
by Dan Williams
Hi Linus, please pull from:
...to receive a handful of libnvdimm fixups.
The largest change is for a regression that landed during -rc1 for
block-device read-only handling. Vaibhav found a new use for the
ability (originally introduced by virtio_pmem) to call back to the
platform to flush data, but also found an original bug in that
implementation. Lastly, Arnd cleans up some compile warnings in dax.
This has all appeared in -next with no reported issues.
---
The following changes since commit e49d033bddf5b565044e2abe4241353959bc9120:
Linux 5.12-rc6 (2021-04-04 14:15:36 -0700)
are available in the Git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm
tags/libnvdimm-fixes-for-5.12-rc8
for you to fetch changes up to 11d2498f1568a0f923dc8ef7621de15a9e89267f:
Merge branch 'for-5.12/dax' into libnvdimm-fixes (2021-04-09 22:00:09 -0700)
----------------------------------------------------------------
libnvdimm fixes for v5.12-rc8
- Fix a regression of read-only handling in the pmem driver.
- Fix a compile warning.
- Fix support for platform cache flush commands on powerpc/papr
----------------------------------------------------------------
Arnd Bergmann (1):
dax: avoid -Wempty-body warnings
Dan Williams (2):
libnvdimm: Notify disk drivers to revalidate region read-only
Merge branch 'for-5.12/dax' into libnvdimm-fixes
Vaibhav Jain (1):
libnvdimm/region: Fix nvdimm_has_flush() to handle ND_REGION_ASYNC
drivers/dax/bus.c | 6 ++----
drivers/nvdimm/bus.c | 14 ++++++--------
drivers/nvdimm/pmem.c | 37 +++++++++++++++++++++++++++++++++----
drivers/nvdimm/region_devs.c | 16 ++++++++++++++--
include/linux/nd.h | 1 +
5 files changed, 56 insertions(+), 18 deletions(-)