[ndctl PATCH] ndctl, {create, destroy}-namespace: clarify --force option
by Dan Williams
When a 'destroy-namespace' or 'create-namespace --reconfig' operation
encounters a namespace that is mounted, the operation always fails.
The '--force' option only forces continuation when the namespace is
active but not mounted. If the namespace is mounted then even --force
will fail.
Reported-by: Jeff Balk <jeff.balk(a)intel.com>
Signed-off-by: Dan Williams <dan.j.williams(a)intel.com>
---
Documentation/ndctl/ndctl-create-namespace.txt | 11 +++++++----
Documentation/ndctl/ndctl-destroy-namespace.txt | 5 ++++-
2 files changed, 11 insertions(+), 5 deletions(-)
diff --git a/Documentation/ndctl/ndctl-create-namespace.txt b/Documentation/ndctl/ndctl-create-namespace.txt
index 85d1f8db792f..4f1f9849207f 100644
--- a/Documentation/ndctl/ndctl-create-namespace.txt
+++ b/Documentation/ndctl/ndctl-create-namespace.txt
@@ -133,10 +133,13 @@ OPTIONS
-f::
--force::
- Unless this option is specified a 'reconfigure
- namespace' operation will fail if the namespace is presently
- active. Specifying --force causes the namespace to be disabled
- before reconfiguring.
+ Unless this option is specified the 'reconfigure namespace'
+ operation will fail if the namespace is presently active.
+ Specifying --force causes the namespace to be disabled before
+ the operation is attempted. However, if the namespace is
+ mounted then the 'disable namespace' and 'reconfigure
+ namespace' operations will be aborted. The namespace must be
+ unmounted before being reconfigured.
-v::
--verbose::
diff --git a/Documentation/ndctl/ndctl-destroy-namespace.txt b/Documentation/ndctl/ndctl-destroy-namespace.txt
index 8130b2156452..7078c0b21929 100644
--- a/Documentation/ndctl/ndctl-destroy-namespace.txt
+++ b/Documentation/ndctl/ndctl-destroy-namespace.txt
@@ -20,7 +20,10 @@ include::xable-namespace-options.txt[]
Unless this option is specified the 'destroy namespace'
operation will fail if the namespace is presently active.
Specifying --force causes the namespace to be disabled before
- the operation is attempted.
+ the operation is attempted. However, if the namespace is
+ mounted then the 'disable namespace' and 'destroy
+ namespace' operations will be aborted. The namespace must be
+ unmounted before being destroyed.
COPYRIGHT
---------
3 years, 2 months
[ndctl PATCH] ndctl, create-namespace: clarify autolabel failures and fallback
by Dan Williams
The autolabel feature tries to enable labels whenever a namespace in a
label-less mode region is reconfigured. It builds on the assumption that
if the entire capacity is being reconfigured then the operation can try
to assume exclusive ownership of all the DIMMs in that region.
However, if a given DIMM is a member of multiple regions then the
reconfiguration operation cannot assume that ownership. We detect that
case by checking if the DIMM in regionX is still active in regionY after
disabling regionX.
In that case we fail the autolabel, but we should not fail the namespace
reconfiguration. Provide debug messages to indicate why the autolabel
failed, then continue the namespace reconfiguration in label-less mode.
Signed-off-by: Dan Williams <dan.j.williams(a)intel.com>
---
ndctl/namespace.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/ndctl/namespace.c b/ndctl/namespace.c
index 077d5968d0e8..b780924ecf4c 100644
--- a/ndctl/namespace.c
+++ b/ndctl/namespace.c
@@ -901,8 +901,10 @@ static int enable_labels(struct ndctl_region *region)
count = 0;
ndctl_dimm_foreach_in_region(region, dimm)
if (ndctl_dimm_is_active(dimm)) {
+ warning("%s is active in %s, failing autolabel\n",
+ ndctl_dimm_get_devname(dimm),
+ ndctl_region_get_devname(region));
count++;
- break;
}
/* some of the dimms belong to multiple regions?? */
@@ -945,7 +947,7 @@ out:
if (ndctl_region_get_nstype(region) != ND_DEVICE_NAMESPACE_PMEM) {
debug("%s: failed to initialize labels\n",
ndctl_region_get_devname(region));
- return -ENXIO;
+ return -EBUSY;
}
return 0;
@@ -968,9 +970,8 @@ static int namespace_reconfig(struct ndctl_region *region,
/* check if we can enable labels on this region */
if (ndctl_region_get_nstype(region) == ND_DEVICE_NAMESPACE_IO
&& p.autolabel) {
- rc = enable_labels(region);
- if (rc)
- return rc;
+ /* if this fails, try to continue label-less */
+ enable_labels(region);
}
ndns = region_get_namespace(region);
[PATCH 0/4] fix device-dax pud crash and fixup {pte,pmd,pud}_write
by Dan Williams
Andrew,
Here is a new version of the pud_write() fix [1], and some follow-on
patches to use the '_access_permitted' helpers in fault and
get_user_pages() paths where we are checking whether the thread has
write access. I explicitly omit conversions for places where the kernel
is checking the _PAGE_RW flag for kernel purposes, not for userspace
access.
Beyond fixing the crash, this series also fixes get_user_pages() and
fault paths to honor protection keys in the same manner as
get_user_pages_fast(). Only the crash fix is tagged for -stable as the
protection key check is done just for consistency reasons since
userspace can change protection keys at will.
[1]: https://lists.01.org/pipermail/linux-nvdimm/2017-November/013237.html
---
Dan Williams (4):
mm: fix device-dax pud write-faults triggered by get_user_pages()
mm: replace pud_write with pud_access_permitted in fault + gup paths
mm: replace pmd_write with pmd_access_permitted in fault + gup paths
mm: replace pte_write with pte_access_permitted in fault + gup paths
arch/sparc/mm/gup.c | 4 ++--
arch/x86/include/asm/pgtable.h | 6 ++++++
fs/dax.c | 3 ++-
include/asm-generic/pgtable.h | 9 +++++++++
include/linux/hugetlb.h | 8 --------
mm/gup.c | 2 +-
mm/hmm.c | 8 ++++----
mm/huge_memory.c | 6 +++---
mm/memory.c | 8 ++++----
9 files changed, 31 insertions(+), 23 deletions(-)
[ndctl PATCH] ndctl, test: improve dax gup test coverage
by Dan Williams
Add tests for read-only to read-write conversions of dax mappings and
for the gup slow path where the initial fault is a write fault. The
gup-slow-path write fault path has been broken for the pud case since it
was introduced upstream.
Cc: Dave Jiang <dave.jiang(a)intel.com>
Signed-off-by: Dan Williams <dan.j.williams(a)intel.com>
---
test/dax-pmd.c | 39 +++++++++++++++++++++++++++++++++++----
1 file changed, 35 insertions(+), 4 deletions(-)
diff --git a/test/dax-pmd.c b/test/dax-pmd.c
index 6276913a0fda..1a296aff32ea 100644
--- a/test/dax-pmd.c
+++ b/test/dax-pmd.c
@@ -27,8 +27,10 @@
#include <linux/fiemap.h>
#define NUM_EXTENTS 5
-#define fail() fprintf(stderr, "%s: failed at: %d\n", __func__, __LINE__)
-#define faili(i) fprintf(stderr, "%s: failed at: %d: %d\n", __func__, __LINE__, i)
+#define fail() fprintf(stderr, "%s: failed at: %d (%s)\n", \
+ __func__, __LINE__, strerror(errno))
+#define faili(i) fprintf(stderr, "%s: failed at: %d: %d (%s)\n", \
+ __func__, __LINE__, i, strerror(errno))
#define TEST_FILE "test_dax_data"
int test_dax_directio(int dax_fd, unsigned long align, void *dax_addr, off_t offset)
@@ -39,7 +41,7 @@ int test_dax_directio(int dax_fd, unsigned long align, void *dax_addr, off_t off
if (posix_memalign(&buf, 4096, 4096) != 0)
return -ENOMEM;
- for (i = 0; i < 3; i++) {
+ for (i = 0; i < 5; i++) {
void *addr = mmap(dax_addr, 2*align,
PROT_READ|PROT_WRITE, MAP_SHARED, dax_fd,
offset);
@@ -62,11 +64,20 @@ int test_dax_directio(int dax_fd, unsigned long align, void *dax_addr, off_t off
fprintf(stderr, "%s: test: %d\n", __func__, i);
rc = 0;
switch (i) {
- case 0: /* test O_DIRECT of unfaulted address */
+ case 0: /* test O_DIRECT read of unfaulted address */
if (write(fd2, addr, 4096) != 4096) {
faili(i);
rc = -ENXIO;
}
+
+ /*
+ * test O_DIRECT write of pre-faulted read-only
+ * address
+ */
+ if (pread(fd2, addr, 4096, 0) != 4096) {
+ faili(i);
+ rc = -ENXIO;
+ }
break;
case 1: /* test O_DIRECT of pre-faulted address */
sprintf(addr, "odirect data");
@@ -100,6 +111,26 @@ int test_dax_directio(int dax_fd, unsigned long align, void *dax_addr, off_t off
} else
faili(i);
break;
+ case 3: /* convert ro mapping to rw */
+ rc = *(volatile int *) addr;
+ *(volatile int *) addr = rc;
+ rc = 0;
+ break;
+ case 4: /* test O_DIRECT write of unfaulted address */
+ sprintf(buf, "O_DIRECT write of unfaulted address\n");
+ if (pwrite(fd2, buf, 4096, 0) < 4096) {
+ faili(i);
+ rc = -ENXIO;
+ break;
+ }
+
+ if (pread(fd2, addr, 4096, 0) < 4096) {
+ faili(i);
+ rc = -ENXIO;
+ break;
+ }
+ rc = 0;
+ break;
default:
faili(i);
rc = -ENXIO;
[RFC PATCH] mm: fix device-dax pud write-faults triggered by get_user_pages()
by Dan Williams
Currently only get_user_pages_fast() can safely handle the writable gup
case due to its use of pud_access_permitted() to check whether the pud
entry is writable. The gup slow path uses pud_write() instead of
pud_access_permitted(), and to date it has been left unimplemented; it
just calls BUG():
kernel BUG at ./include/linux/hugetlb.h:244!
[..]
RIP: 0010:follow_devmap_pud+0x482/0x490
[..]
Call Trace:
follow_page_mask+0x28c/0x6e0
__get_user_pages+0xe4/0x6c0
get_user_pages_unlocked+0x130/0x1b0
get_user_pages_fast+0x89/0xb0
iov_iter_get_pages_alloc+0x114/0x4a0
nfs_direct_read_schedule_iovec+0xd2/0x350
? nfs_start_io_direct+0x63/0x70
nfs_file_direct_read+0x1e0/0x250
nfs_file_read+0x90/0xc0
Use pud_access_permitted() to implement pud_write(). A later cleanup can
remove {pte,pmd,pud}_write and replace them with
{pte,pmd,pud}_access_permitted() directly so that we only have one set of
helpers for these kinds of checks. For now, implementing pud_write()
simplifies -stable backports.
Cc: <stable(a)vger.kernel.org>
Cc: Dave Hansen <dave.hansen(a)intel.com>
Fixes: a00cc7d9dd93 ("mm, x86: add support for PUD-sized transparent hugepages")
Signed-off-by: Dan Williams <dan.j.williams(a)intel.com>
---
Sending this as RFC for opinion on whether this should just be a
pud_flags() & _PAGE_RW check, like pmd_write, or pud_access_permitted()
that also takes protection keys into account.
include/linux/hugetlb.h | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index fbf5b31d47ee..6a142b240ef7 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -242,8 +242,7 @@ static inline int pgd_write(pgd_t pgd)
#ifndef pud_write
static inline int pud_write(pud_t pud)
{
- BUG();
- return 0;
+ return pud_access_permitted(pud, WRITE);
}
#endif
[ndctl PATCH v2] libndctl, nfit: Fix in/out sizes for error injection commands
by Vishal Verma
The input/output size bounds being set in the various nd_bus_cmd_new_*
helpers for error injection commands were larger than they needed to be,
and platforms could reject these. Fix the bounds to be exactly as the
spec describes.
Cc: Dan Williams <dan.j.williams(a)intel.com>
Reported-by: Dariusz Dokupil <dariusz.dokupil(a)intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma(a)intel.com>
---
ndctl/lib/nfit.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
v2: set the in/out sizes based explicitly on the sizes/offsets in the
cmd structure so that their relationship becomes obvious (Dan).
diff --git a/ndctl/lib/nfit.c b/ndctl/lib/nfit.c
index fb6af32..2ae3f07 100644
--- a/ndctl/lib/nfit.c
+++ b/ndctl/lib/nfit.c
@@ -164,9 +164,9 @@ struct ndctl_cmd *ndctl_bus_cmd_new_err_inj(struct ndctl_bus *bus)
cmd->status = 1;
pkg = (struct nd_cmd_pkg *)&cmd->cmd_buf[0];
pkg->nd_command = NFIT_CMD_ARS_INJECT_SET;
- pkg->nd_size_in = (2 * sizeof(u64)) + sizeof(u32);
- pkg->nd_size_out = cmd_length;
- pkg->nd_fw_size = cmd_length;
+ pkg->nd_size_in = offsetof(struct nd_cmd_ars_err_inj, status);
+ pkg->nd_size_out = cmd_length - pkg->nd_size_in;
+ pkg->nd_fw_size = pkg->nd_size_out;
err_inj = (struct nd_cmd_ars_err_inj *)&pkg->nd_payload[0];
cmd->firmware_status = &err_inj->status;
@@ -193,9 +193,9 @@ struct ndctl_cmd *ndctl_bus_cmd_new_err_inj_clr(struct ndctl_bus *bus)
cmd->status = 1;
pkg = (struct nd_cmd_pkg *)&cmd->cmd_buf[0];
pkg->nd_command = NFIT_CMD_ARS_INJECT_CLEAR;
- pkg->nd_size_in = 2 * sizeof(u64);
- pkg->nd_size_out = cmd_length;
- pkg->nd_fw_size = cmd_length;
+ pkg->nd_size_in = offsetof(struct nd_cmd_ars_err_inj_clr, status);
+ pkg->nd_size_out = cmd_length - pkg->nd_size_in;
+ pkg->nd_fw_size = pkg->nd_size_out;
err_inj_clr = (struct nd_cmd_ars_err_inj_clr *)&pkg->nd_payload[0];
cmd->firmware_status = &err_inj_clr->status;
@@ -224,9 +224,9 @@ struct ndctl_cmd *ndctl_bus_cmd_new_err_inj_stat(struct ndctl_bus *bus,
cmd->status = 1;
pkg = (struct nd_cmd_pkg *)&cmd->cmd_buf[0];
pkg->nd_command = NFIT_CMD_ARS_INJECT_GET;
- pkg->nd_size_in = cmd_length;
+ pkg->nd_size_in = 0;
pkg->nd_size_out = cmd_length + buf_size;
- pkg->nd_fw_size = cmd_length + buf_size;
+ pkg->nd_fw_size = pkg->nd_size_out;
err_inj_stat = (struct nd_cmd_ars_err_inj_stat *)&pkg->nd_payload[0];
cmd->firmware_status = &err_inj_stat->status;
--
2.9.5
[PATCH] vmalloc: introduce vmap_pfn for persistent memory
by Mikulas Patocka
Hi
I am developing a driver that uses persistent memory for caching. A
persistent memory device can be mapped in several discontiguous ranges.
The kernel has a function vmap that takes an array of pointers to pages
and maps these pages to contiguous linear address space. However, it can't
be used on persistent memory because persistent memory may not be backed
by page structures.
This patch introduces a new function, vmap_pfn, which works like vmap
but takes an array of pfn_t, so it can be used on persistent memory.
This is an example how vmap_pfn is used:
https://www.redhat.com/archives/dm-devel/2017-November/msg00026.html (see
the function persistent_memory_claim)
Mikulas
From: Mikulas Patocka <mpatocka(a)redhat.com>
There's a function vmap that can take discontiguous pages and map them
linearly to the vmalloc space. However, persistent memory may not be
backed by pages, so we can't use vmap on it.
This patch introduces a function vmap_pfn that works like vmap, but it
takes an array of page frame numbers (pfn_t). It can be used to remap
discontiguous chunks of persistent memory into a linear range.
Signed-off-by: Mikulas Patocka <mpatocka(a)redhat.com>
---
include/linux/vmalloc.h | 2 +
mm/vmalloc.c | 88 +++++++++++++++++++++++++++++++++++++-----------
2 files changed, 71 insertions(+), 19 deletions(-)
Index: linux-2.6/include/linux/vmalloc.h
===================================================================
--- linux-2.6.orig/include/linux/vmalloc.h
+++ linux-2.6/include/linux/vmalloc.h
@@ -98,6 +98,8 @@ extern void vfree_atomic(const void *add
extern void *vmap(struct page **pages, unsigned int count,
unsigned long flags, pgprot_t prot);
+extern void *vmap_pfn(pfn_t *pfns, unsigned int count,
+ unsigned long flags, pgprot_t prot);
extern void vunmap(const void *addr);
extern int remap_vmalloc_range_partial(struct vm_area_struct *vma,
Index: linux-2.6/mm/vmalloc.c
===================================================================
--- linux-2.6.orig/mm/vmalloc.c
+++ linux-2.6/mm/vmalloc.c
@@ -31,6 +31,7 @@
#include <linux/compiler.h>
#include <linux/llist.h>
#include <linux/bitops.h>
+#include <linux/pfn_t.h>
#include <linux/uaccess.h>
#include <asm/tlbflush.h>
@@ -132,7 +133,7 @@ static void vunmap_page_range(unsigned l
}
static int vmap_pte_range(pmd_t *pmd, unsigned long addr,
- unsigned long end, pgprot_t prot, struct page **pages, int *nr)
+ unsigned long end, pgprot_t prot, struct page **pages, pfn_t *pfns, int *nr)
{
pte_t *pte;
@@ -145,20 +146,25 @@ static int vmap_pte_range(pmd_t *pmd, un
if (!pte)
return -ENOMEM;
do {
- struct page *page = pages[*nr];
-
+ unsigned long pf;
+ if (pages) {
+ struct page *page = pages[*nr];
+ if (WARN_ON(!page))
+ return -ENOMEM;
+ pf = page_to_pfn(page);
+ } else {
+ pf = pfn_t_to_pfn(pfns[*nr]);
+ }
if (WARN_ON(!pte_none(*pte)))
return -EBUSY;
- if (WARN_ON(!page))
- return -ENOMEM;
- set_pte_at(&init_mm, addr, pte, mk_pte(page, prot));
+ set_pte_at(&init_mm, addr, pte, pfn_pte(pf, prot));
(*nr)++;
} while (pte++, addr += PAGE_SIZE, addr != end);
return 0;
}
static int vmap_pmd_range(pud_t *pud, unsigned long addr,
- unsigned long end, pgprot_t prot, struct page **pages, int *nr)
+ unsigned long end, pgprot_t prot, struct page **pages, pfn_t *pfns, int *nr)
{
pmd_t *pmd;
unsigned long next;
@@ -168,14 +174,14 @@ static int vmap_pmd_range(pud_t *pud, un
return -ENOMEM;
do {
next = pmd_addr_end(addr, end);
- if (vmap_pte_range(pmd, addr, next, prot, pages, nr))
+ if (vmap_pte_range(pmd, addr, next, prot, pages, pfns, nr))
return -ENOMEM;
} while (pmd++, addr = next, addr != end);
return 0;
}
static int vmap_pud_range(p4d_t *p4d, unsigned long addr,
- unsigned long end, pgprot_t prot, struct page **pages, int *nr)
+ unsigned long end, pgprot_t prot, struct page **pages, pfn_t *pfns, int *nr)
{
pud_t *pud;
unsigned long next;
@@ -185,14 +191,14 @@ static int vmap_pud_range(p4d_t *p4d, un
return -ENOMEM;
do {
next = pud_addr_end(addr, end);
- if (vmap_pmd_range(pud, addr, next, prot, pages, nr))
+ if (vmap_pmd_range(pud, addr, next, prot, pages, pfns, nr))
return -ENOMEM;
} while (pud++, addr = next, addr != end);
return 0;
}
static int vmap_p4d_range(pgd_t *pgd, unsigned long addr,
- unsigned long end, pgprot_t prot, struct page **pages, int *nr)
+ unsigned long end, pgprot_t prot, struct page **pages, pfn_t *pfns, int *nr)
{
p4d_t *p4d;
unsigned long next;
@@ -202,7 +208,7 @@ static int vmap_p4d_range(pgd_t *pgd, un
return -ENOMEM;
do {
next = p4d_addr_end(addr, end);
- if (vmap_pud_range(p4d, addr, next, prot, pages, nr))
+ if (vmap_pud_range(p4d, addr, next, prot, pages, pfns, nr))
return -ENOMEM;
} while (p4d++, addr = next, addr != end);
return 0;
@@ -215,7 +221,7 @@ static int vmap_p4d_range(pgd_t *pgd, un
* Ie. pte at addr+N*PAGE_SIZE shall point to pfn corresponding to pages[N]
*/
static int vmap_page_range_noflush(unsigned long start, unsigned long end,
- pgprot_t prot, struct page **pages)
+ pgprot_t prot, struct page **pages, pfn_t *pfns)
{
pgd_t *pgd;
unsigned long next;
@@ -227,7 +233,7 @@ static int vmap_page_range_noflush(unsig
pgd = pgd_offset_k(addr);
do {
next = pgd_addr_end(addr, end);
- err = vmap_p4d_range(pgd, addr, next, prot, pages, &nr);
+ err = vmap_p4d_range(pgd, addr, next, prot, pages, pfns, &nr);
if (err)
return err;
} while (pgd++, addr = next, addr != end);
@@ -236,11 +242,11 @@ static int vmap_page_range_noflush(unsig
}
static int vmap_page_range(unsigned long start, unsigned long end,
- pgprot_t prot, struct page **pages)
+ pgprot_t prot, struct page **pages, pfn_t *pfns)
{
int ret;
- ret = vmap_page_range_noflush(start, end, prot, pages);
+ ret = vmap_page_range_noflush(start, end, prot, pages, pfns);
flush_cache_vmap(start, end);
return ret;
}
@@ -1191,7 +1197,7 @@ void *vm_map_ram(struct page **pages, un
addr = va->va_start;
mem = (void *)addr;
}
- if (vmap_page_range(addr, addr + size, prot, pages) < 0) {
+ if (vmap_page_range(addr, addr + size, prot, pages, NULL) < 0) {
vm_unmap_ram(mem, count);
return NULL;
}
@@ -1306,7 +1312,7 @@ void __init vmalloc_init(void)
int map_kernel_range_noflush(unsigned long addr, unsigned long size,
pgprot_t prot, struct page **pages)
{
- return vmap_page_range_noflush(addr, addr + size, prot, pages);
+ return vmap_page_range_noflush(addr, addr + size, prot, pages, NULL);
}
/**
@@ -1347,13 +1353,24 @@ void unmap_kernel_range(unsigned long ad
}
EXPORT_SYMBOL_GPL(unmap_kernel_range);
+static int map_vm_area_pfn(struct vm_struct *area, pgprot_t prot, pfn_t *pfns)
+{
+ unsigned long addr = (unsigned long)area->addr;
+ unsigned long end = addr + get_vm_area_size(area);
+ int err;
+
+ err = vmap_page_range(addr, end, prot, NULL, pfns);
+
+ return err > 0 ? 0 : err;
+}
+
int map_vm_area(struct vm_struct *area, pgprot_t prot, struct page **pages)
{
unsigned long addr = (unsigned long)area->addr;
unsigned long end = addr + get_vm_area_size(area);
int err;
- err = vmap_page_range(addr, end, prot, pages);
+ err = vmap_page_range(addr, end, prot, pages, NULL);
return err > 0 ? 0 : err;
}
@@ -1660,6 +1677,39 @@ void *vmap(struct page **pages, unsigned
}
EXPORT_SYMBOL(vmap);
+/**
+ * vmap_pfn - map an array of page frames into virtually contiguous space
+ * @pfns: array of page frame numbers
+ * @count: number of pages to map
+ * @flags: vm_area->flags
+ * @prot: page protection for the mapping
+ *
+ * Maps @count page frames from @pfns into contiguous kernel virtual
+ * space.
+ */
+void *vmap_pfn(pfn_t *pfns, unsigned int count, unsigned long flags, pgprot_t prot)
+{
+ struct vm_struct *area;
+ unsigned long size; /* In bytes */
+
+ might_sleep();
+
+ size = (unsigned long)count << PAGE_SHIFT;
+ if (unlikely((size >> PAGE_SHIFT) != count))
+ return NULL;
+ area = get_vm_area_caller(size, flags, __builtin_return_address(0));
+ if (!area)
+ return NULL;
+
+ if (map_vm_area_pfn(area, prot, pfns)) {
+ vunmap(area->addr);
+ return NULL;
+ }
+
+ return area->addr;
+}
+EXPORT_SYMBOL(vmap_pfn);
+
static void *__vmalloc_node(unsigned long size, unsigned long align,
gfp_t gfp_mask, pgprot_t prot,
int node, const void *caller);
[ndctl PATCH] libndctl, nfit: Fix in/out sizes for error injection commands
by Vishal Verma
The input/output size bounds being set in the various nd_bus_cmd_new_*
helpers for error injection commands were larger than they needed to be,
and platforms could reject these. Fix the bounds to be exactly as the
spec describes.
Cc: Dan Williams <dan.j.williams(a)intel.com>
Reported-by: Dariusz Dokupil <dariusz.dokupil(a)intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma(a)intel.com>
---
ndctl/lib/nfit.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/ndctl/lib/nfit.c b/ndctl/lib/nfit.c
index fb6af32..6346fd9 100644
--- a/ndctl/lib/nfit.c
+++ b/ndctl/lib/nfit.c
@@ -164,9 +164,9 @@ struct ndctl_cmd *ndctl_bus_cmd_new_err_inj(struct ndctl_bus *bus)
cmd->status = 1;
pkg = (struct nd_cmd_pkg *)&cmd->cmd_buf[0];
pkg->nd_command = NFIT_CMD_ARS_INJECT_SET;
- pkg->nd_size_in = (2 * sizeof(u64)) + sizeof(u32);
- pkg->nd_size_out = cmd_length;
- pkg->nd_fw_size = cmd_length;
+ pkg->nd_size_in = (2 * sizeof(u64)) + sizeof(u8);
+ pkg->nd_size_out = sizeof(u32);
+ pkg->nd_fw_size = sizeof(u32);
err_inj = (struct nd_cmd_ars_err_inj *)&pkg->nd_payload[0];
cmd->firmware_status = &err_inj->status;
@@ -194,8 +194,8 @@ struct ndctl_cmd *ndctl_bus_cmd_new_err_inj_clr(struct ndctl_bus *bus)
pkg = (struct nd_cmd_pkg *)&cmd->cmd_buf[0];
pkg->nd_command = NFIT_CMD_ARS_INJECT_CLEAR;
pkg->nd_size_in = 2 * sizeof(u64);
- pkg->nd_size_out = cmd_length;
- pkg->nd_fw_size = cmd_length;
+ pkg->nd_size_out = sizeof(u32);
+ pkg->nd_fw_size = sizeof(u32);
err_inj_clr = (struct nd_cmd_ars_err_inj_clr *)&pkg->nd_payload[0];
cmd->firmware_status = &err_inj_clr->status;
@@ -224,9 +224,9 @@ struct ndctl_cmd *ndctl_bus_cmd_new_err_inj_stat(struct ndctl_bus *bus,
cmd->status = 1;
pkg = (struct nd_cmd_pkg *)&cmd->cmd_buf[0];
pkg->nd_command = NFIT_CMD_ARS_INJECT_GET;
- pkg->nd_size_in = cmd_length;
- pkg->nd_size_out = cmd_length + buf_size;
- pkg->nd_fw_size = cmd_length + buf_size;
+ pkg->nd_size_in = 0;
+ pkg->nd_size_out = (2 * sizeof(u32)) + buf_size;
+ pkg->nd_fw_size = (2 * sizeof(u32)) + buf_size;
err_inj_stat = (struct nd_cmd_ars_err_inj_stat *)&pkg->nd_payload[0];
cmd->firmware_status = &err_inj_stat->status;
--
2.9.5