OpenCV ANN_MLP model xx.xml

I use ANN_MLP to train a handwritten-digit recognizer; my network structure is 784-64-10.
I am trying to extract the weights of each layer of the network with the getWeights() function.
But the extracted weight matrices for the two layers are 785*64 and 65*10.
Does this mean there is one bias neuron per layer?
Following that idea, when I test the network myself I add a 1 to the array of 784 input pixels (in my code below it is appended as the 785th element) and multiply the resulting vector by the first weight matrix; the second layer is computed the same way. But the network's recognition result is wrong.
So what I am wondering now is whether the position where I add the 1 to the 784-pixel input actually corresponds to the bias entry of the weight matrix it is multiplied by.
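
For reference, this is roughly how I read the weight matrices out of the saved model (a minimal sketch assuming the OpenCV 3.x/4.x cv::ml API; which layer indices to pass to getWeights() is my assumption):

#include <opencv2/core.hpp>
#include <opencv2/ml.hpp>
#include <iostream>

int main()
{
	// load the trained model (cv::ml::ANN_MLP::load needs OpenCV >= 3.3;
	// older versions can use cv::Algorithm::load<cv::ml::ANN_MLP> instead)
	cv::Ptr<cv::ml::ANN_MLP> ann = cv::ml::ANN_MLP::load("xx.xml");

	// should print 784, 64, 10
	cv::Mat layerSizes = ann->getLayerSizes();
	std::cout << "layer sizes: " << layerSizes << std::endl;

	// indices 1 and 2 are assumed to be the two trainable layers;
	// these come back as 785x64 and 65x10 (one extra row per layer)
	cv::Mat w1 = ann->getWeights(1);
	cv::Mat w2 = ann->getWeights(2);
	std::cout << "w1: " << w1.rows << " x " << w1.cols << std::endl;
	std::cout << "w2: " << w2.rows << " x " << w2.cols << std::endl;
	return 0;
}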

my code:


	// copy the 784 input pixels and append a 1 as the 785th (bias) element
	for (i = 0; i < 785; i++)
	{
		if (i == 784)
			mnist_reshape[i] = 1;
		else
			mnist_reshape[i] = mnist[i];
	}

	// first layer: 785 inputs (including the bias 1) -> 64 hidden neurons
	for (j = 0; j < 64; j++)
	{
		// weighted sum for hidden neuron j
		z1 = 0.0;
		for (k = 0; k < 785; k++)
			z1 += weight1[k][j] * mnist_reshape[k];
		output1[j] = z1;
	}
	Sigmoid(output1, 64);
	// copy the 64 hidden outputs and append a 1 as the 65th (bias) element
	for (i = 0; i < 65; i++)
	{
		if (i == 64)
			output1_reshape[i] = 1;
		else
			output1_reshape[i] = output1[i];
	}

	// second layer: 65 values (including the bias 1) -> 10 output neurons
	for (j = 0; j < 10; j++)
	{
		// weighted sum for output neuron j
		z2 = 0.0;
		for (k = 0; k < 65; k++)
			z2 += weight2[k][j] * output1_reshape[k];
		output2[j] = z2;
	}
	Sigmoid(output2, 10);

	// print the 10 output activations
	for (k = 0; k < 10; k++)
	{
		output_r[k] = output2[k];
		cout << "result: " << output_r[k] << endl;
	}
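
For completeness, Sigmoid() above is my own helper that applies the activation elementwise; a minimal sketch of it, assuming the standard logistic form 1/(1+e^-x):

#include <cmath>

// apply the logistic function to each of the n values in place
// (sketch of the Sigmoid() helper used above; standard 1/(1+e^-x) assumed)
void Sigmoid(double* v, int n)
{
	for (int i = 0; i < n; i++)
		v[i] = 1.0 / (1.0 + std::exp(-v[i]));
}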

result:

Can anyone help me? Thanks.