re: social traffic for my website
by PAUL
hi
Cheap Facebook Groups Traffic for your website
For full details, please read the attached .html file
Regards
PAUL
An unsubscribe option is available in the footer of our website
5 years, 4 months
Notice to Appear in Court
by County Court
Notice to Appear,
This is to inform you that you are to appear in Court on August 29 for your case hearing.
You are kindly asked to prepare and bring the documents relating to the case to Court on the specified date.
Note: If you do not come, the case will be heard in your absence.
A copy of the Court Notice is attached to this email.
Sincerely,
Larry Daly,
Court Secretary.
5 years, 4 months
linux-nvdimm: What are your big customers doing?
by 雍丽菲
Practical Management Skills Training for Production Managers and Supervisors
Dates: September 3-4, 2015, Shanghai; September 17-18, Shenzhen
Fee: RMB 4,800 buy-one-get-one-free, or RMB 3,200 for a single attendee (covers the course, handouts, lunch, taxes, refreshments, materials, etc.)
Intended audience: factory directors, managers, workshop directors, supervisors, team leaders, section chiefs, foremen and other managers and management trainees in manufacturing enterprises
Contact: 0512-6870.0652 (0)153-06200-569
===========================================================================================
Course background:
External challenges facing enterprises:
The mobile internet era is led by customers and consumers, and enterprises need a stronger ability to adapt to their customers
Customers are increasingly demanding: faster delivery, better quality, lower prices
Order batches keep shrinking, shifting toward personalized customization
Material and labor costs keep rising
With personalized orders, imbalance will become the norm; enterprises that lack flexible, rapid responsiveness will gradually lose competitiveness and be forced out
Internal challenges facing enterprises:
A flexible production system has not been built, or remains unsatisfactory, lacking agile adaptability
Production operations are still unsmooth, unbalanced, and at times chaotic
Efficiency, quality and cost problems remain widespread; management is coarse where it should be refined
Staff are hard to recruit, hard to manage, and hard to retain
Staff motivation is low, the sense of belonging weak, and morale poor
Supervisor problems inside the enterprise:
Some supervisors lack management awareness and a sense of responsibility; lax, unrigorous and unscientific management habits persist
Some supervisors still lack management thinking and methods, short on both systematic design ability and the ability to handle and control specific problems
Some supervisors have narrow thinking, few approaches and limited skill in staff development, management, communication and motivation
===========================================================================================
Training benefits:
Strengthen management awareness; clarify management responsibilities, roles and mindset
Understand the systematic model of balanced management
Understand refined, flexible management thinking
Master 20 practical management methods
Master approaches and methods for planning, control, communication, motivation and solving people-management problems
===========================================================================================
Course format:
Action learning: working from the participants' own company problems, solve them on the spot through case-experience sharing, group discussion and method drills
===========================================================================================
Course outline:
I. Topic: Management awareness
1. The basic duties of management
2. The role an excellent manager plays
3. How to balance management and line work?
4. The management mindset an excellent manager should have
II. Topic: Production-chain management
1. Problem collection: problems participants see in production-chain planning and operations
2. Problem discussion: how to improve on and resolve the problems identified
3. Experience sharing: analysis of successful production-planning-chain management cases
4. Action measures: summary of actionable methods
5. Reference methods:
Balanced planning
Material-staging control
Exception management and control
Kanban management
Morning-meeting method
III. Topic: Material-chain management
1. Problem collection: problems participants see in material-chain management
2. Problem discussion: improving on and resolving the problems identified
3. Experience sharing: analysis of successful material-management cases
4. Action measures: summary of actionable methods
5. Reference methods:
Procurement control
Material-staging control
Audit control
IV. Topic: Quality management
1. Problem collection: problems participants see in the product-quality management system
2. Problem discussion: improving on and resolving the problems identified
3. Experience sharing: analysis of successful quality-management cases
4. Action measures: summary of actionable methods
5. Reference methods:
Task-force organization
Regular quality meetings
Audit control
V. Topic: Shop-floor management
1. Problem collection: problems participants see in managing shop-floor materials, equipment, energy, quality, tools, processes and 5S
2. Problem discussion: improving on and resolving the problems identified
3. Experience sharing: analysis of successful shop-floor management cases
4. Action measures: summary of actionable methods
5. Reference methods:
Case analysis
Kanban management
Morning-meeting method
Three-element control
Work PK (competition) method
VI. Topic: Developing subordinates
1. Problem collection: problems participants see in developing team leaders and key staff, and in on-the-job training (OJT)
2. Problem discussion: how to improve on and resolve the problems identified?
3. Experience sharing: successful experience in developing subordinates
4. Action measures: summary of actionable methods
5. Reference method:
Growth planning
VII. Topic: Cross-department coordination
1. Problem collection: problems participants see in cross-department coordination
2. Problem discussion: how to improve on and resolve the problems identified?
3. Experience sharing: successful experience in cross-department coordination and communication
4. Action measures: summary of actionable methods
5. Reference method: cross-department coordination
VIII. Topic: Employee motivation
1. Problem collection: problems participants see in motivating employees
2. Problem discussion: how to improve on and resolve the problems identified?
3. Experience sharing: successful experience in motivating employees
How to help employees increase their income?
How to provide good logistics and welfare support?
How to motivate employees through cultural and recreational activities?
How to motivate employees through workplace competition?
How to retain employees by improving supervisor-subordinate relations?
4. Action measures: summary of actionable methods
5. Reference methods:
Work PK method
"12 theaters" method
Motivation from one's supervisor
IX. Topic: Employee management
1. Problem collection: problems participants see in managing employees
How to manage post-90s employees?
How to manage technical employees?
How to manage veteran employees?
How to manage well-connected employees?
2. Problem discussion: how to improve on and resolve the problems identified?
3. Experience sharing: successful experience in managing employees
Manage affairs by the rules
Move people with empathy
Convince people with reason
Lead people by virtue
Guide people with momentum
4. Action measures: summary of actionable methods
5 years, 4 months
[PATCH] nfit, nd_blk: BLK status register is only 32 bits
by Ross Zwisler
Only read 32 bits for the BLK status register in read_blk_stat().
The format and size of this register are defined in the
"NVDIMM Driver Writers Guide":
http://pmem.io/documents/NVDIMM_Driver_Writers_Guide.pdf
Signed-off-by: Ross Zwisler <ross.zwisler(a)linux.intel.com>
Reported-by: Nicholas Moulin <nicholas.w.moulin(a)linux.intel.com>
---
drivers/acpi/nfit.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/acpi/nfit.c b/drivers/acpi/nfit.c
index 7c2638f..8689ee1 100644
--- a/drivers/acpi/nfit.c
+++ b/drivers/acpi/nfit.c
@@ -1009,7 +1009,7 @@ static void wmb_blk(struct nfit_blk *nfit_blk)
wmb_pmem();
}
-static u64 read_blk_stat(struct nfit_blk *nfit_blk, unsigned int bw)
+static u32 read_blk_stat(struct nfit_blk *nfit_blk, unsigned int bw)
{
struct nfit_blk_mmio *mmio = &nfit_blk->mmio[DCR];
u64 offset = nfit_blk->stat_offset + mmio->size * bw;
@@ -1017,7 +1017,7 @@ static u64 read_blk_stat(struct nfit_blk *nfit_blk, unsigned int bw)
if (mmio->num_lines)
offset = to_interleave_offset(offset, mmio);
- return readq(mmio->base + offset);
+ return readl(mmio->base + offset);
}
static void write_blk_ctl(struct nfit_blk *nfit_blk, unsigned int bw,
--
2.1.0
5 years, 4 months
re: social traffic for my website
by DANIEL
hi
Cheap Facebook Groups Traffic for your website
For full details, please read the attached .html file
Regards
DANIEL
An unsubscribe option is available in the footer of our website
5 years, 4 months
[PATCH v2] nd_blk: add support for "read flush" DSM flag
by Ross Zwisler
Add support for the "read flush" _DSM flag, as outlined in the DSM spec:
http://pmem.io/documents/NVDIMM_DSM_Interface_Example.pdf
This flag tells the ND BLK driver that it needs to flush the cache lines
associated with the aperture after the aperture is moved but before any
new data is read. This ensures that any stale cache lines from the
previous contents of the aperture will be discarded from the processor
cache, and the new data will be read properly from the DIMM. We know
that the cache lines are clean and will be discarded without any
writeback because either a) the previous aperture operation was a read,
and we never modified the contents of the aperture, or b) the previous
aperture operation was a write and we must have written back the dirtied
contents of the aperture to the DIMM before the I/O was completed.
By supporting the "read flush" flag we can also change the ND BLK
aperture mapping from write-combining to write-back via memremap().
In order to add support for the "read flush" flag I needed to add a
generic routine to invalidate cache lines, mmio_flush_range(). This is
protected by the ARCH_HAS_MMIO_FLUSH Kconfig variable, and is currently
only supported on x86.
Signed-off-by: Ross Zwisler <ross.zwisler(a)linux.intel.com>
Cc: Dan Williams <dan.j.williams(a)intel.com>
---
arch/x86/Kconfig | 1 +
arch/x86/include/asm/cacheflush.h | 2 ++
arch/x86/include/asm/io.h | 2 --
arch/x86/include/asm/pmem.h | 2 ++
drivers/acpi/Kconfig | 1 +
drivers/acpi/nfit.c | 55 ++++++++++++++++++++++-----------------
drivers/acpi/nfit.h | 16 ++++++++----
lib/Kconfig | 3 +++
tools/testing/nvdimm/Kbuild | 2 ++
tools/testing/nvdimm/test/iomap.c | 30 +++++++++++++++++++--
tools/testing/nvdimm/test/nfit.c | 10 ++++---
11 files changed, 88 insertions(+), 36 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 76c6115..03ab612 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -28,6 +28,7 @@ config X86
select ARCH_HAS_FAST_MULTIPLIER
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_PMEM_API
+ select ARCH_HAS_MMIO_FLUSH
select ARCH_HAS_SG_CHAIN
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select ARCH_MIGHT_HAVE_ACPI_PDC if ACPI
diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h
index 471418a..e63aa38 100644
--- a/arch/x86/include/asm/cacheflush.h
+++ b/arch/x86/include/asm/cacheflush.h
@@ -89,6 +89,8 @@ int set_pages_rw(struct page *page, int numpages);
void clflush_cache_range(void *addr, unsigned int size);
+#define mmio_flush_range(addr, size) clflush_cache_range(addr, size)
+
#ifdef CONFIG_DEBUG_RODATA
void mark_rodata_ro(void);
extern const int rodata_test_data;
diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index d241fbd..83ec9b1 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -248,8 +248,6 @@ static inline void flush_write_buffers(void)
#endif
}
-#define ARCH_MEMREMAP_PMEM MEMREMAP_WB
-
#endif /* __KERNEL__ */
extern void native_io_delay(void);
diff --git a/arch/x86/include/asm/pmem.h b/arch/x86/include/asm/pmem.h
index a3a0df6..bb026c5 100644
--- a/arch/x86/include/asm/pmem.h
+++ b/arch/x86/include/asm/pmem.h
@@ -18,6 +18,8 @@
#include <asm/cpufeature.h>
#include <asm/special_insns.h>
+#define ARCH_MEMREMAP_PMEM MEMREMAP_WB
+
#ifdef CONFIG_ARCH_HAS_PMEM_API
/**
* arch_memcpy_to_pmem - copy data to persistent memory
diff --git a/drivers/acpi/Kconfig b/drivers/acpi/Kconfig
index 114cf48..4baeb85 100644
--- a/drivers/acpi/Kconfig
+++ b/drivers/acpi/Kconfig
@@ -410,6 +410,7 @@ config ACPI_NFIT
tristate "ACPI NVDIMM Firmware Interface Table (NFIT)"
depends on PHYS_ADDR_T_64BIT
depends on BLK_DEV
+ depends on ARCH_HAS_MMIO_FLUSH
select LIBNVDIMM
help
Infrastructure to probe ACPI 6 compliant platforms for
diff --git a/drivers/acpi/nfit.c b/drivers/acpi/nfit.c
index 7c2638f..56fff01 100644
--- a/drivers/acpi/nfit.c
+++ b/drivers/acpi/nfit.c
@@ -1017,7 +1017,7 @@ static u64 read_blk_stat(struct nfit_blk *nfit_blk, unsigned int bw)
if (mmio->num_lines)
offset = to_interleave_offset(offset, mmio);
- return readq(mmio->base + offset);
+ return readq(mmio->addr.base + offset);
}
static void write_blk_ctl(struct nfit_blk *nfit_blk, unsigned int bw,
@@ -1042,11 +1042,11 @@ static void write_blk_ctl(struct nfit_blk *nfit_blk, unsigned int bw,
if (mmio->num_lines)
offset = to_interleave_offset(offset, mmio);
- writeq(cmd, mmio->base + offset);
+ writeq(cmd, mmio->addr.base + offset);
wmb_blk(nfit_blk);
if (nfit_blk->dimm_flags & ND_BLK_DCR_LATCH)
- readq(mmio->base + offset);
+ readq(mmio->addr.base + offset);
}
static int acpi_nfit_blk_single_io(struct nfit_blk *nfit_blk,
@@ -1078,11 +1078,16 @@ static int acpi_nfit_blk_single_io(struct nfit_blk *nfit_blk,
}
if (rw)
- memcpy_to_pmem(mmio->aperture + offset,
+ memcpy_to_pmem(mmio->addr.aperture + offset,
iobuf + copied, c);
- else
+ else {
+ if (nfit_blk->dimm_flags & ND_BLK_READ_FLUSH)
+ mmio_flush_range((void __force *)
+ mmio->addr.aperture + offset, c);
+
memcpy_from_pmem(iobuf + copied,
- mmio->aperture + offset, c);
+ mmio->addr.aperture + offset, c);
+ }
copied += c;
len -= c;
@@ -1129,7 +1134,10 @@ static void nfit_spa_mapping_release(struct kref *kref)
WARN_ON(!mutex_is_locked(&acpi_desc->spa_map_mutex));
dev_dbg(acpi_desc->dev, "%s: SPA%d\n", __func__, spa->range_index);
- iounmap(spa_map->iomem);
+ if (spa_map->type == SPA_MAP_APERTURE)
+ memunmap((void __force *)spa_map->addr.aperture);
+ else
+ iounmap(spa_map->addr.base);
release_mem_region(spa->address, spa->length);
list_del(&spa_map->list);
kfree(spa_map);
@@ -1175,7 +1183,7 @@ static void __iomem *__nfit_spa_map(struct acpi_nfit_desc *acpi_desc,
spa_map = find_spa_mapping(acpi_desc, spa);
if (spa_map) {
kref_get(&spa_map->kref);
- return spa_map->iomem;
+ return spa_map->addr.base;
}
spa_map = kzalloc(sizeof(*spa_map), GFP_KERNEL);
@@ -1191,20 +1199,19 @@ static void __iomem *__nfit_spa_map(struct acpi_nfit_desc *acpi_desc,
if (!res)
goto err_mem;
- if (type == SPA_MAP_APERTURE) {
- /*
- * TODO: memremap_pmem() support, but that requires cache
- * flushing when the aperture is moved.
- */
- spa_map->iomem = ioremap_wc(start, n);
- } else
- spa_map->iomem = ioremap_nocache(start, n);
+ spa_map->type = type;
+ if (type == SPA_MAP_APERTURE)
+ spa_map->addr.aperture = (void __pmem *)memremap(start, n,
+ ARCH_MEMREMAP_PMEM);
+ else
+ spa_map->addr.base = ioremap_nocache(start, n);
+
- if (!spa_map->iomem)
+ if (!spa_map->addr.base)
goto err_map;
list_add_tail(&spa_map->list, &acpi_desc->spa_maps);
- return spa_map->iomem;
+ return spa_map->addr.base;
err_map:
release_mem_region(start, n);
@@ -1267,7 +1274,7 @@ static int acpi_nfit_blk_get_flags(struct nvdimm_bus_descriptor *nd_desc,
nfit_blk->dimm_flags = flags.flags;
else if (rc == -ENOTTY) {
/* fall back to a conservative default */
- nfit_blk->dimm_flags = ND_BLK_DCR_LATCH;
+ nfit_blk->dimm_flags = ND_BLK_DCR_LATCH | ND_BLK_READ_FLUSH;
rc = 0;
} else
rc = -ENXIO;
@@ -1307,9 +1314,9 @@ static int acpi_nfit_blk_region_enable(struct nvdimm_bus *nvdimm_bus,
/* map block aperture memory */
nfit_blk->bdw_offset = nfit_mem->bdw->offset;
mmio = &nfit_blk->mmio[BDW];
- mmio->base = nfit_spa_map(acpi_desc, nfit_mem->spa_bdw,
+ mmio->addr.base = nfit_spa_map(acpi_desc, nfit_mem->spa_bdw,
SPA_MAP_APERTURE);
- if (!mmio->base) {
+ if (!mmio->addr.base) {
dev_dbg(dev, "%s: %s failed to map bdw\n", __func__,
nvdimm_name(nvdimm));
return -ENOMEM;
@@ -1330,9 +1337,9 @@ static int acpi_nfit_blk_region_enable(struct nvdimm_bus *nvdimm_bus,
nfit_blk->cmd_offset = nfit_mem->dcr->command_offset;
nfit_blk->stat_offset = nfit_mem->dcr->status_offset;
mmio = &nfit_blk->mmio[DCR];
- mmio->base = nfit_spa_map(acpi_desc, nfit_mem->spa_dcr,
+ mmio->addr.base = nfit_spa_map(acpi_desc, nfit_mem->spa_dcr,
SPA_MAP_CONTROL);
- if (!mmio->base) {
+ if (!mmio->addr.base) {
dev_dbg(dev, "%s: %s failed to map dcr\n", __func__,
nvdimm_name(nvdimm));
return -ENOMEM;
@@ -1399,7 +1406,7 @@ static void acpi_nfit_blk_region_disable(struct nvdimm_bus *nvdimm_bus,
for (i = 0; i < 2; i++) {
struct nfit_blk_mmio *mmio = &nfit_blk->mmio[i];
- if (mmio->base)
+ if (mmio->addr.base)
nfit_spa_unmap(acpi_desc, mmio->spa);
}
nd_blk_region_set_provider_data(ndbr, NULL);
diff --git a/drivers/acpi/nfit.h b/drivers/acpi/nfit.h
index f2c2bb7..7e74015 100644
--- a/drivers/acpi/nfit.h
+++ b/drivers/acpi/nfit.h
@@ -41,6 +41,7 @@ enum nfit_uuids {
};
enum {
+ ND_BLK_READ_FLUSH = 1,
ND_BLK_DCR_LATCH = 2,
};
@@ -117,12 +118,16 @@ enum nd_blk_mmio_selector {
DCR,
};
+struct nd_blk_addr {
+ union {
+ void __iomem *base;
+ void __pmem *aperture;
+ };
+};
+
struct nfit_blk {
struct nfit_blk_mmio {
- union {
- void __iomem *base;
- void __pmem *aperture;
- };
+ struct nd_blk_addr addr;
u64 size;
u64 base_offset;
u32 line_size;
@@ -149,7 +154,8 @@ struct nfit_spa_mapping {
struct acpi_nfit_system_address *spa;
struct list_head list;
struct kref kref;
- void __iomem *iomem;
+ enum spa_map_type type;
+ struct nd_blk_addr addr;
};
static inline struct nfit_spa_mapping *to_spa_map(struct kref *kref)
diff --git a/lib/Kconfig b/lib/Kconfig
index 3a2ef67..a938a39 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -531,4 +531,7 @@ config ARCH_HAS_SG_CHAIN
config ARCH_HAS_PMEM_API
bool
+config ARCH_HAS_MMIO_FLUSH
+ bool
+
endmenu
diff --git a/tools/testing/nvdimm/Kbuild b/tools/testing/nvdimm/Kbuild
index e667579..98f2881 100644
--- a/tools/testing/nvdimm/Kbuild
+++ b/tools/testing/nvdimm/Kbuild
@@ -1,8 +1,10 @@
ldflags-y += --wrap=ioremap_wc
+ldflags-y += --wrap=memremap
ldflags-y += --wrap=devm_ioremap_nocache
ldflags-y += --wrap=devm_memremap
ldflags-y += --wrap=ioremap_nocache
ldflags-y += --wrap=iounmap
+ldflags-y += --wrap=memunmap
ldflags-y += --wrap=__devm_request_region
ldflags-y += --wrap=__request_region
ldflags-y += --wrap=__release_region
diff --git a/tools/testing/nvdimm/test/iomap.c b/tools/testing/nvdimm/test/iomap.c
index ff1e004..179d228 100644
--- a/tools/testing/nvdimm/test/iomap.c
+++ b/tools/testing/nvdimm/test/iomap.c
@@ -89,12 +89,25 @@ void *__wrap_devm_memremap(struct device *dev, resource_size_t offset,
nfit_res = get_nfit_res(offset);
rcu_read_unlock();
if (nfit_res)
- return (void __iomem *) nfit_res->buf + offset
- - nfit_res->res->start;
+ return nfit_res->buf + offset - nfit_res->res->start;
return devm_memremap(dev, offset, size, flags);
}
EXPORT_SYMBOL(__wrap_devm_memremap);
+void *__wrap_memremap(resource_size_t offset, size_t size,
+ unsigned long flags)
+{
+ struct nfit_test_resource *nfit_res;
+
+ rcu_read_lock();
+ nfit_res = get_nfit_res(offset);
+ rcu_read_unlock();
+ if (nfit_res)
+ return nfit_res->buf + offset - nfit_res->res->start;
+ return memremap(offset, size, flags);
+}
+EXPORT_SYMBOL(__wrap_memremap);
+
void __iomem *__wrap_ioremap_nocache(resource_size_t offset, unsigned long size)
{
return __nfit_test_ioremap(offset, size, ioremap_nocache);
@@ -120,6 +133,19 @@ void __wrap_iounmap(volatile void __iomem *addr)
}
EXPORT_SYMBOL(__wrap_iounmap);
+void __wrap_memunmap(void *addr)
+{
+ struct nfit_test_resource *nfit_res;
+
+ rcu_read_lock();
+ nfit_res = get_nfit_res((unsigned long) addr);
+ rcu_read_unlock();
+ if (nfit_res)
+ return;
+ return memunmap(addr);
+}
+EXPORT_SYMBOL(__wrap_memunmap);
+
static struct resource *nfit_test_request_region(struct device *dev,
struct resource *parent, resource_size_t start,
resource_size_t n, const char *name, int flags)
diff --git a/tools/testing/nvdimm/test/nfit.c b/tools/testing/nvdimm/test/nfit.c
index 28dba91..021e6f9 100644
--- a/tools/testing/nvdimm/test/nfit.c
+++ b/tools/testing/nvdimm/test/nfit.c
@@ -1029,9 +1029,13 @@ static int nfit_test_blk_do_io(struct nd_blk_region *ndbr, resource_size_t dpa,
lane = nd_region_acquire_lane(nd_region);
if (rw)
- memcpy(mmio->base + dpa, iobuf, len);
- else
- memcpy(iobuf, mmio->base + dpa, len);
+ memcpy(mmio->addr.base + dpa, iobuf, len);
+ else {
+ memcpy(iobuf, mmio->addr.base + dpa, len);
+
+ /* give us some coverage of the mmio_flush_range() API */
+ mmio_flush_range(mmio->addr.base + dpa, len);
+ }
nd_region_release_lane(nd_region, lane);
return 0;
--
2.1.0
5 years, 5 months
[RFC PATCH 0/7] 'struct page' driver for persistent memory
by Dan Williams
When we last left this debate [1] it was becoming clear that the
'page-less' approach left too many I/O scenarios off the table. The
page-less enabling is still useful for avoiding the overhead of struct
page where it is not needed, but in the end, page-backed persistent
memory seems to be a requirement.
With that assumption in place the next debate was where to allocate the
storage for the memmap array, or otherwise reduce the overhead of 'struct
page' with a fancier object like variable length pages.
This series takes the position of mapping persistent memory with
standard 'struct page' and pushes the policy decision of allocating the
storage for the memmap array, from RAM or PMEM, to userspace. It turns
out the best place to allocate 64-bytes per 4K page will be platform
specific.
If PMEM capacities are low then mapping in RAM is a good choice.
Otherwise, for very large capacities storing the memmap in PMEM might be
a better choice. Yet again, PMEM might not have the performance
characteristics favorable to a high rate of change object like 'struct
page'. The kernel can make a reasonable guess, but it seems we will need
to maintain the ability to override any default.
Outside of the new libnvdimm sysfs mechanisms to specify the memmap
allocation policy for a given PMEM device, the core of this
implementation is 'struct vmem_altmap'. 'vmem_altmap' alters the memory
hotplug code to optionally use a reserved PMEM-pfn range rather than
dynamic allocation for the memmap.
Only lightly tested so far to confirm valid pfn_to_page() and
page_address() conversions across a range of persistent memory specified
by 'memmap=ss!nn' (kernel command line option to simulate a PMEM
range).
[1]: https://lists.01.org/pipermail/linux-nvdimm/2015-May/000748.html
---
Dan Williams (7):
x86, mm: ZONE_DEVICE for "device memory"
x86, mm: introduce struct vmem_altmap
x86, mm: arch_add_dev_memory()
mm: register_dev_memmap()
libnvdimm, e820: make CONFIG_X86_PMEM_LEGACY a tristate option
libnvdimm, pfn: 'struct page' provider infrastructure
libnvdimm, pmem: 'struct page' for pmem
arch/powerpc/mm/init_64.c | 7 +
arch/x86/Kconfig | 19 ++
arch/x86/include/uapi/asm/e820.h | 2
arch/x86/kernel/Makefile | 2
arch/x86/kernel/pmem.c | 79 +--------
arch/x86/mm/init_64.c | 160 +++++++++++++-----
drivers/nvdimm/Kconfig | 26 +++
drivers/nvdimm/Makefile | 5 +
drivers/nvdimm/btt.c | 8 -
drivers/nvdimm/btt_devs.c | 172 +------------------
drivers/nvdimm/claim.c | 201 ++++++++++++++++++++++
drivers/nvdimm/e820.c | 86 ++++++++++
drivers/nvdimm/namespace_devs.c | 34 +++-
drivers/nvdimm/nd-core.h | 9 +
drivers/nvdimm/nd.h | 59 ++++++-
drivers/nvdimm/pfn.h | 35 ++++
drivers/nvdimm/pfn_devs.c | 334 +++++++++++++++++++++++++++++++++++++
drivers/nvdimm/pmem.c | 213 +++++++++++++++++++++++-
drivers/nvdimm/region.c | 2
drivers/nvdimm/region_devs.c | 19 ++
include/linux/kmap_pfn.h | 33 ++++
include/linux/memory_hotplug.h | 21 ++
include/linux/mm.h | 53 ++++++
include/linux/mmzone.h | 23 +++
mm/kmap_pfn.c | 195 ++++++++++++++++++++++
mm/memory_hotplug.c | 84 ++++++---
mm/page_alloc.c | 18 ++
mm/sparse-vmemmap.c | 60 ++++++-
mm/sparse.c | 44 +++--
tools/testing/nvdimm/Kbuild | 7 +
tools/testing/nvdimm/test/iomap.c | 13 +
31 files changed, 1673 insertions(+), 350 deletions(-)
create mode 100644 drivers/nvdimm/claim.c
create mode 100644 drivers/nvdimm/e820.c
create mode 100644 drivers/nvdimm/pfn.h
create mode 100644 drivers/nvdimm/pfn_devs.c
5 years, 5 months