
Commit f7e6bd3

Author: Alexei Starovoitov
Merge branch 'bpf-support-new-insns-from-cpu-v4'
Yonghong Song says:

====================
bpf: Support new insns from cpu v4

In a previous discussion ([1]), it was agreed that we should introduce cpu
version 4 (llvm flag -mcpu=v4), which contains instructions that can simplify
code, make code easier to understand, fix existing problems, or simply round
out feature completeness. More specifically, the following new insns are
proposed:

  . sign extended load
  . sign extended mov
  . bswap
  . signed div/mod
  . ja with 32-bit offset

This patch set adds kernel support for the insns proposed in [1] except
BPF_ST, which already has full kernel support. Besides the insns proposed
above, LLVM will generate BPF_ST insns as well under -mcpu=v4. The llvm patch
([2]) has been merged into the llvm-project 'main' branch.

The patch set implements interpreter, jit and verifier support for these new
insns.

For this patch set, I tested cpu v2/v3/v4 and the selftests all passed. I
also tested the selftests introduced in this patch set with additional
changes besides normal jit testing (bpf_jit_enable = 1 and bpf_jit_harden = 0):
  - bpf_jit_enable = 0
  - bpf_jit_enable = 1 and bpf_jit_harden = 1
and both runs passed.

  [1] https://lore.kernel.org/bpf/4bfe98be-5333-1c7e-2f6d-42486c8ec039@meta.com/
  [2] https://reviews.llvm.org/D144829

Changelogs:
  v4 -> v5:
    . for v4, patch 8/17 was missing in the mailing list and patchwork, so resend.
    . rebase on top of master.
  v3 -> v4:
    . some minor asm syntax adjustments based on llvm changes.
    . add clang version and target arch guards for new tests so they can
      still compile with old llvm compilers.
    . some changes to the bpf doc.
  v2 -> v3:
    . add missed disasm change from v2.
    . handle signed load of ctx fields properly.
    . fix some interpreter sdiv/smod errors when bpf_jit_enable = 0.
    . fix some verifier range bounding errors.
    . add more C tests.
  RFCv1 -> v2:
    . add more verifier support for sign extended load and mov insns.
    . rename some insn names to be more consistent with intel practice.
    . add a cpuv4 test runner for test progs.
    . add more unit and C tests.
    . add documentation.
====================

Link: https://lore.kernel.org/r/20230728011143.3710005-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 parents 10d78a6 + 245d4c4

21 files changed: 2251 additions & 170 deletions

Documentation/bpf/bpf_design_QA.rst

Lines changed: 0 additions & 5 deletions
@@ -140,11 +140,6 @@ A: Because if we picked one-to-one relationship to x64 it would have made
 it more complicated to support on arm64 and other archs. Also it
 needs div-by-zero runtime check.
 
-Q: Why there is no BPF_SDIV for signed divide operation?
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-A: Because it would be rarely used. llvm errors in such case and
-prints a suggestion to use unsigned divide instead.
-
 Q: Why BPF has implicit prologue and epilogue?
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 A: Because architectures like sparc have register windows and in general

Documentation/bpf/standardization/instruction-set.rst

Lines changed: 79 additions & 36 deletions
@@ -154,24 +154,27 @@ otherwise identical operations.
 The 'code' field encodes the operation as below, where 'src' and 'dst' refer
 to the values of the source and destination registers, respectively.
 
-======== ===== ==========================================================
-code     value description
-======== ===== ==========================================================
-BPF_ADD  0x00  dst += src
-BPF_SUB  0x10  dst -= src
-BPF_MUL  0x20  dst \*= src
-BPF_DIV  0x30  dst = (src != 0) ? (dst / src) : 0
-BPF_OR   0x40  dst \|= src
-BPF_AND  0x50  dst &= src
-BPF_LSH  0x60  dst <<= (src & mask)
-BPF_RSH  0x70  dst >>= (src & mask)
-BPF_NEG  0x80  dst = -dst
-BPF_MOD  0x90  dst = (src != 0) ? (dst % src) : dst
-BPF_XOR  0xa0  dst ^= src
-BPF_MOV  0xb0  dst = src
-BPF_ARSH 0xc0  sign extending dst >>= (src & mask)
-BPF_END  0xd0  byte swap operations (see `Byte swap instructions`_ below)
-======== ===== ==========================================================
+========= ===== ======= ==========================================================
+code      value offset  description
+========= ===== ======= ==========================================================
+BPF_ADD   0x00  0       dst += src
+BPF_SUB   0x10  0       dst -= src
+BPF_MUL   0x20  0       dst \*= src
+BPF_DIV   0x30  0       dst = (src != 0) ? (dst / src) : 0
+BPF_SDIV  0x30  1       dst = (src != 0) ? (dst s/ src) : 0
+BPF_OR    0x40  0       dst \|= src
+BPF_AND   0x50  0       dst &= src
+BPF_LSH   0x60  0       dst <<= (src & mask)
+BPF_RSH   0x70  0       dst >>= (src & mask)
+BPF_NEG   0x80  0       dst = -dst
+BPF_MOD   0x90  0       dst = (src != 0) ? (dst % src) : dst
+BPF_SMOD  0x90  1       dst = (src != 0) ? (dst s% src) : dst
+BPF_XOR   0xa0  0       dst ^= src
+BPF_MOV   0xb0  0       dst = src
+BPF_MOVSX 0xb0  8/16/32 dst = (s8,s16,s32)src
+BPF_ARSH  0xc0  0       sign extending dst >>= (src & mask)
+BPF_END   0xd0  0       byte swap operations (see `Byte swap instructions`_ below)
+========= ===== ======= ==========================================================
 
 Underflow and overflow are allowed during arithmetic operations, meaning
 the 64-bit or 32-bit value will wrap. If eBPF program execution would
@@ -198,33 +201,44 @@ where '(u32)' indicates that the upper 32 bits are zeroed.
 
   dst = dst ^ imm32
 
-Also note that the division and modulo operations are unsigned. Thus, for
-``BPF_ALU``, 'imm' is first interpreted as an unsigned 32-bit value, whereas
-for ``BPF_ALU64``, 'imm' is first sign extended to 64 bits and the result
-interpreted as an unsigned 64-bit value. There are no instructions for
-signed division or modulo.
+Note that most instructions have an instruction offset of 0, but three
+instructions (BPF_SDIV, BPF_SMOD, BPF_MOVSX) have a non-zero offset.
+
+The division and modulo operations support both unsigned and signed flavors.
+For the unsigned operations (BPF_DIV and BPF_MOD) under ``BPF_ALU``, 'imm' is
+first interpreted as an unsigned 32-bit value, whereas for ``BPF_ALU64``,
+'imm' is first sign extended to 64 bits and the result interpreted as an
+unsigned 64-bit value. For the signed operations (BPF_SDIV and BPF_SMOD)
+under ``BPF_ALU``, 'imm' is interpreted as a signed value; for ``BPF_ALU64``,
+'imm' is sign extended from 32 to 64 bits and interpreted as a signed 64-bit
+value.
+
+The BPF_MOVSX instruction does a move operation with sign extension.
+``BPF_ALU | MOVSX`` sign extends 8-bit and 16-bit operands into 32 bits and
+zeroes the upper 32 bits.
+``BPF_ALU64 | MOVSX`` sign extends 8-bit, 16-bit and 32-bit operands into 64 bits.
 
 Shift operations use a mask of 0x3F (63) for 64-bit operations and 0x1F (31)
 for 32-bit operations.
 
 Byte swap instructions
 ~~~~~~~~~~~~~~~~~~~~~~
 
-The byte swap instructions use an instruction class of ``BPF_ALU`` and a 4-bit
-'code' field of ``BPF_END``.
+The byte swap instructions use the instruction classes ``BPF_ALU`` and
+``BPF_ALU64`` and a 4-bit 'code' field of ``BPF_END``.
 
 The byte swap instructions operate on the destination register
 only and do not use a separate source register or immediate value.
 
-The 1-bit source operand field in the opcode is used to select what byte
-order the operation convert from or to:
+For ``BPF_ALU``, the 1-bit source operand field in the opcode is used to
+select what byte order the operation converts from or to. For ``BPF_ALU64``,
+the 1-bit source operand field in the opcode is not used and must be 0.
 
-========= ===== =================================================
-source    value description
-========= ===== =================================================
-BPF_TO_LE 0x00  convert between host byte order and little endian
-BPF_TO_BE 0x08  convert between host byte order and big endian
-========= ===== =================================================
+========= ========= ===== =================================================
+class     source    value description
+========= ========= ===== =================================================
+BPF_ALU   BPF_TO_LE 0x00  convert between host byte order and little endian
+BPF_ALU   BPF_TO_BE 0x08  convert between host byte order and big endian
+BPF_ALU64 BPF_TO_LE 0x00  do byte swap unconditionally
+========= ========= ===== =================================================
 
 The 'imm' field encodes the width of the swap operations. The following widths
 are supported: 16, 32 and 64.
@@ -239,6 +253,12 @@ Examples:
 
   dst = htobe64(dst)
 
+``BPF_ALU64 | BPF_TO_LE | BPF_END`` with imm = 16/32/64 means::
+
+  dst = bswap16 dst
+  dst = bswap32 dst
+  dst = bswap64 dst
+
 Jump instructions
 -----------------
 
@@ -249,7 +269,8 @@ The 'code' field encodes the operation as below:
 ======== ===== === =========================================== =========================================
 code     value src description                                 notes
 ======== ===== === =========================================== =========================================
-BPF_JA   0x0   0x0 PC += offset                                BPF_JMP only
+BPF_JA   0x0   0x0 PC += offset                                BPF_JMP class
+BPF_JA   0x0   0x0 PC += imm                                   BPF_JMP32 class
 BPF_JEQ  0x1   any PC += offset if dst == src
 BPF_JGT  0x2   any PC += offset if dst > src                   unsigned
 BPF_JGE  0x3   any PC += offset if dst >= src                  unsigned
@@ -278,6 +299,16 @@ Example:
 
 where 's>=' indicates a signed '>=' comparison.
 
+``BPF_JA | BPF_K | BPF_JMP32`` (0x06) means::
+
+  gotol +imm
+
+where 'imm' means the branch offset comes from the insn 'imm' field.
+
+Note there are two flavors of BPF_JA instructions. The BPF_JMP class permits a
+16-bit jump offset while BPF_JMP32 permits a 32-bit jump offset. A conditional
+jump needing more than a 16-bit offset can be converted to a conditional jump
+within 16 bits plus a 32-bit unconditional jump.
+
 Helper functions
 ~~~~~~~~~~~~~~~~
 
@@ -320,6 +351,7 @@ The mode modifier is one of:
   BPF_ABS       0x20  legacy BPF packet access (absolute)  `Legacy BPF Packet access instructions`_
   BPF_IND       0x40  legacy BPF packet access (indirect)  `Legacy BPF Packet access instructions`_
   BPF_MEM       0x60  regular load and store operations    `Regular load and store operations`_
+  BPF_MEMSX     0x80  sign-extension load operations       `Sign-extension load operations`_
   BPF_ATOMIC    0xc0  atomic operations                    `Atomic operations`_
   ============= ===== ==================================== =============
 
@@ -350,9 +382,20 @@ instructions that transfer data between a register and memory.
 
 ``BPF_MEM | <size> | BPF_LDX`` means::
 
-  dst = *(size *) (src + offset)
+  dst = *(unsigned size *) (src + offset)
+
+Where size is one of: ``BPF_B``, ``BPF_H``, ``BPF_W``, or ``BPF_DW`` and
+'unsigned size' is one of u8, u16, u32 and u64.
+
+The ``BPF_MEMSX`` mode modifier is used to encode sign-extension load
+instructions that transfer data between a register and memory.
+
+``BPF_MEMSX | <size> | BPF_LDX`` means::
+
+  dst = *(signed size *) (src + offset)
 
-Where size is one of: ``BPF_B``, ``BPF_H``, ``BPF_W``, or ``BPF_DW``.
+Where size is one of: ``BPF_B``, ``BPF_H`` or ``BPF_W``, and
+'signed size' is one of s8, s16 and s32.
 
 Atomic operations
 -----------------

arch/x86/net/bpf_jit_comp.c

Lines changed: 117 additions & 24 deletions
@@ -701,6 +701,38 @@ static void emit_mov_reg(u8 **pprog, bool is64, u32 dst_reg, u32 src_reg)
 	*pprog = prog;
 }
 
+static void emit_movsx_reg(u8 **pprog, int num_bits, bool is64, u32 dst_reg,
+			   u32 src_reg)
+{
+	u8 *prog = *pprog;
+
+	if (is64) {
+		/* movs[b,w,l]q dst, src */
+		if (num_bits == 8)
+			EMIT4(add_2mod(0x48, src_reg, dst_reg), 0x0f, 0xbe,
+			      add_2reg(0xC0, src_reg, dst_reg));
+		else if (num_bits == 16)
+			EMIT4(add_2mod(0x48, src_reg, dst_reg), 0x0f, 0xbf,
+			      add_2reg(0xC0, src_reg, dst_reg));
+		else if (num_bits == 32)
+			EMIT3(add_2mod(0x48, src_reg, dst_reg), 0x63,
+			      add_2reg(0xC0, src_reg, dst_reg));
+	} else {
+		/* movs[b,w]l dst, src */
+		if (num_bits == 8) {
+			EMIT4(add_2mod(0x40, src_reg, dst_reg), 0x0f, 0xbe,
+			      add_2reg(0xC0, src_reg, dst_reg));
+		} else if (num_bits == 16) {
+			if (is_ereg(dst_reg) || is_ereg(src_reg))
+				EMIT1(add_2mod(0x40, src_reg, dst_reg));
+			EMIT3(add_2mod(0x0f, src_reg, dst_reg), 0xbf,
+			      add_2reg(0xC0, src_reg, dst_reg));
+		}
+	}
+
+	*pprog = prog;
+}
+
 /* Emit the suffix (ModR/M etc) for addressing *(ptr_reg + off) and val_reg */
 static void emit_insn_suffix(u8 **pprog, u32 ptr_reg, u32 val_reg, int off)
 {
@@ -779,6 +811,29 @@ static void emit_ldx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
 	*pprog = prog;
 }
 
+/* LDSX: dst_reg = *(s8*)(src_reg + off) */
+static void emit_ldsx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
+{
+	u8 *prog = *pprog;
+
+	switch (size) {
+	case BPF_B:
+		/* Emit 'movsx rax, byte ptr [rax + off]' */
+		EMIT3(add_2mod(0x48, src_reg, dst_reg), 0x0F, 0xBE);
+		break;
+	case BPF_H:
+		/* Emit 'movsx rax, word ptr [rax + off]' */
+		EMIT3(add_2mod(0x48, src_reg, dst_reg), 0x0F, 0xBF);
+		break;
+	case BPF_W:
+		/* Emit 'movsx rax, dword ptr [rax+0x14]' */
+		EMIT2(add_2mod(0x48, src_reg, dst_reg), 0x63);
+		break;
+	}
+	emit_insn_suffix(&prog, src_reg, dst_reg, off);
+	*pprog = prog;
+}
+
 /* STX: *(u8*)(dst_reg + off) = src_reg */
 static void emit_stx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
 {
@@ -1028,9 +1083,14 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 
 		case BPF_ALU64 | BPF_MOV | BPF_X:
 		case BPF_ALU | BPF_MOV | BPF_X:
-			emit_mov_reg(&prog,
-				     BPF_CLASS(insn->code) == BPF_ALU64,
-				     dst_reg, src_reg);
+			if (insn->off == 0)
+				emit_mov_reg(&prog,
+					     BPF_CLASS(insn->code) == BPF_ALU64,
+					     dst_reg, src_reg);
+			else
+				emit_movsx_reg(&prog, insn->off,
+					       BPF_CLASS(insn->code) == BPF_ALU64,
+					       dst_reg, src_reg);
 			break;
 
 		/* neg dst */
@@ -1134,15 +1194,26 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 			/* mov rax, dst_reg */
 			emit_mov_reg(&prog, is64, BPF_REG_0, dst_reg);
 
-			/*
-			 * xor edx, edx
-			 * equivalent to 'xor rdx, rdx', but one byte less
-			 */
-			EMIT2(0x31, 0xd2);
+			if (insn->off == 0) {
+				/*
+				 * xor edx, edx
+				 * equivalent to 'xor rdx, rdx', but one byte less
+				 */
+				EMIT2(0x31, 0xd2);
 
-			/* div src_reg */
-			maybe_emit_1mod(&prog, src_reg, is64);
-			EMIT2(0xF7, add_1reg(0xF0, src_reg));
+				/* div src_reg */
+				maybe_emit_1mod(&prog, src_reg, is64);
+				EMIT2(0xF7, add_1reg(0xF0, src_reg));
+			} else {
+				if (BPF_CLASS(insn->code) == BPF_ALU)
+					EMIT1(0x99); /* cdq */
+				else
+					EMIT2(0x48, 0x99); /* cqo */
+
+				/* idiv src_reg */
+				maybe_emit_1mod(&prog, src_reg, is64);
+				EMIT2(0xF7, add_1reg(0xF8, src_reg));
+			}
 
 			if (BPF_OP(insn->code) == BPF_MOD &&
 			    dst_reg != BPF_REG_3)
@@ -1262,6 +1333,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 			break;
 
 		case BPF_ALU | BPF_END | BPF_FROM_BE:
+		case BPF_ALU64 | BPF_END | BPF_FROM_LE:
 			switch (imm32) {
 			case 16:
 				/* Emit 'ror %ax, 8' to swap lower 2 bytes */
@@ -1370,9 +1442,17 @@ st:			if (is_imm8(insn->off))
 		case BPF_LDX | BPF_PROBE_MEM | BPF_W:
 		case BPF_LDX | BPF_MEM | BPF_DW:
 		case BPF_LDX | BPF_PROBE_MEM | BPF_DW:
+			/* LDXS: dst_reg = *(s8*)(src_reg + off) */
+		case BPF_LDX | BPF_MEMSX | BPF_B:
+		case BPF_LDX | BPF_MEMSX | BPF_H:
+		case BPF_LDX | BPF_MEMSX | BPF_W:
+		case BPF_LDX | BPF_PROBE_MEMSX | BPF_B:
+		case BPF_LDX | BPF_PROBE_MEMSX | BPF_H:
+		case BPF_LDX | BPF_PROBE_MEMSX | BPF_W:
 			insn_off = insn->off;
 
-			if (BPF_MODE(insn->code) == BPF_PROBE_MEM) {
+			if (BPF_MODE(insn->code) == BPF_PROBE_MEM ||
+			    BPF_MODE(insn->code) == BPF_PROBE_MEMSX) {
 				/* Conservatively check that src_reg + insn->off is a kernel address:
 				 * src_reg + insn->off >= TASK_SIZE_MAX + PAGE_SIZE
 				 * src_reg is used as scratch for src_reg += insn->off and restored
@@ -1415,8 +1495,13 @@ st:			if (is_imm8(insn->off))
 				start_of_ldx = prog;
 				end_of_jmp[-1] = start_of_ldx - end_of_jmp;
 			}
-			emit_ldx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn_off);
-			if (BPF_MODE(insn->code) == BPF_PROBE_MEM) {
+			if (BPF_MODE(insn->code) == BPF_PROBE_MEMSX ||
+			    BPF_MODE(insn->code) == BPF_MEMSX)
+				emit_ldsx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn_off);
+			else
+				emit_ldx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn_off);
+			if (BPF_MODE(insn->code) == BPF_PROBE_MEM ||
+			    BPF_MODE(insn->code) == BPF_PROBE_MEMSX) {
 				struct exception_table_entry *ex;
 				u8 *_insn = image + proglen + (start_of_ldx - temp);
 				s64 delta;
@@ -1730,16 +1815,24 @@ st:			if (is_imm8(insn->off))
 			break;
 
 		case BPF_JMP | BPF_JA:
-			if (insn->off == -1)
-				/* -1 jmp instructions will always jump
-				 * backwards two bytes. Explicitly handling
-				 * this case avoids wasting too many passes
-				 * when there are long sequences of replaced
-				 * dead code.
-				 */
-				jmp_offset = -2;
-			else
-				jmp_offset = addrs[i + insn->off] - addrs[i];
+		case BPF_JMP32 | BPF_JA:
+			if (BPF_CLASS(insn->code) == BPF_JMP) {
+				if (insn->off == -1)
+					/* -1 jmp instructions will always jump
+					 * backwards two bytes. Explicitly handling
+					 * this case avoids wasting too many passes
+					 * when there are long sequences of replaced
+					 * dead code.
+					 */
+					jmp_offset = -2;
+				else
+					jmp_offset = addrs[i + insn->off] - addrs[i];
+			} else {
+				if (insn->imm == -1)
+					jmp_offset = -2;
+				else
+					jmp_offset = addrs[i + insn->imm] - addrs[i];
+			}
 
 			if (!jmp_offset) {
 				/*
