Commit cd8bc74

fix(test_render_abstract.py): update expected output after abstract render logic change
1 parent 225f3fa commit cd8bc74

1 file changed

Lines changed: 2 additions & 2 deletions

File tree

tests/test_render_abstract.py

@@ -33,7 +33,7 @@ def test_one_abstract1(self):
         self.renderer.render_abstract(file_mock)
         file_mock.assert_called_with(file_mock, "w")
         file_mock().write.assert_called_with(
-            """\\documentclass{article}\\begin{document}\\begin{abstract}{\\color{Abstract_color}\n This is the content of abstract.\n }\\end{abstract}\\end{document}"""
+            """\\documentclass{article}\\begin{document}{\\begin{abstract}\\color{Abstract_color}\n This is the content of abstract.\n \\end{abstract}}\\end{document}"""
         )

     def test_one_abstract2(self):
@@ -45,5 +45,5 @@ def test_one_abstract2(self):
         self.renderer.render_abstract(file_mock)
         file_mock.assert_called_with(file_mock, "w")
         file_mock().write.assert_called_with(
-            "\\begin{abstract}{\\color{Abstract_color}\nWe consider the projection onto the $\\ell_{1, \\infty}$ norm ball for solving group sparse optimization problems arising in multi-task learning. \nBased on the primal-dual gradient method (PDG), we present a novel primal-dual Newton method, which updates the primal and dual iterates incrementally with Newton steps. \nWe exploit the problem-inherent structure so that the closed-form for the primal-dual Newton steps can be derived easily. \nCompared with existing algorithms, our approach does not need to maintain the primal feasibility, nor require the computation of the $\\ell_1$ ball projection subproblems. \nWe prove that our algorithm terminates after a finite number of iterations. \nNumerical simulations on synthetic and real-world data show that our proposed method is faster than the previous state-of-the-art.\n\n}\\end{abstract}"
+            "{\\begin{abstract}\\color{Abstract_color}\nWe consider the projection onto the $\\ell_{1, \\infty}$ norm ball for solving group sparse optimization problems arising in multi-task learning. \nBased on the primal-dual gradient method (PDG), we present a novel primal-dual Newton method, which updates the primal and dual iterates incrementally with Newton steps. \nWe exploit the problem-inherent structure so that the closed-form for the primal-dual Newton steps can be derived easily. \nCompared with existing algorithms, our approach does not need to maintain the primal feasibility, nor require the computation of the $\\ell_1$ ball projection subproblems. \nWe prove that our algorithm terminates after a finite number of iterations. \nNumerical simulations on synthetic and real-world data show that our proposed method is faster than the previous state-of-the-art.\n\n\\end{abstract}}"
         )
