Hi guys! I am trying to extract text from a screenshot in memory using pytesseract, without reading or writing any file on disk.
This is my screenshot:
So, if you compare the two grayscale images, the one from cvtColor and the one from imread, you can see that they are different.
Starting from gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), my threshold
limiar, imgThreash = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU) does not work. Does anyone have an idea how I can get around this and threshold the screenshot directly, without saving it to a file first and loading it back with imread?
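For context, here is roughly the pipeline in question (a sketch only; it assumes the screenshot is grabbed with PIL's ImageGrab, and the bbox coordinates are placeholders):

import cv2
import numpy as np
import pytesseract
from PIL import ImageGrab

# screenshot straight into memory (bbox values are placeholders)
img = np.array(ImageGrab.grab(bbox=(0, 0, 800, 600)))

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
limiar, imgThreash = cv2.threshold(gray, 127, 255,
                                   cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# this is the step that does not give usable output
text = pytesseract.image_to_string(imgThreash)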
PS: I tried to upload the other example images, but the website blocked me because I am a new user!
Best regards!
Dima_Fantasy was an AI spam bot. Do not trust anything it posted.
@crackwitz, sorry about that. I did not see that.
Thanks.
I finally found a solution to my problem.
Here is the code:
import cv2
import numpy as np
from PIL import Image, ImageGrab

def ThresholdFromScreenShot(tupleCoordenates):
    # grab the screen region straight into memory (no file on disk)
    pixels = np.array(ImageGrab.grab(bbox=tupleCoordenates))
    # grayscale via PIL, so the RGB channel order is respected
    gray_f = np.array(Image.fromarray(pixels).convert('L'))
    limiar, imgThreash = cv2.threshold(gray_f, 127, 255,
                                       cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    gray_s = np.array(Image.fromarray(imgThreash).convert('L'))
    # blur and re-threshold to clean up the binary image
    blur = cv2.blur(gray_s, (3, 3))
    limiar, thresh = cv2.threshold(blur, 240, 255, cv2.THRESH_BINARY)
    return thresh
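For the OCR step, the returned array can be passed straight to pytesseract without touching the disk (the coordinates below are just an example):

import pytesseract

text = pytesseract.image_to_string(ThresholdFromScreenShot((0, 0, 800, 600)))
print(text)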
That is equivalent to using cv.cvtColor().
Also… you already use ImageGrab. Why convert to a numpy array and then back to a PIL Image? You could just call convert("L") directly on the grabbed image.
That round-trip seems entirely superfluous.
And those two threshold() calls could be just one, followed by a “dilation” morphology operation.
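A rough sketch of that simplification (untested; the function name, the bbox coordinates, and the 3×3 kernel are my own choices, not from the thread):

import cv2
import numpy as np
from PIL import ImageGrab

def threshold_from_screenshot(bbox):
    # grab and grayscale in one go, with no numpy/PIL round-trips
    gray = np.array(ImageGrab.grab(bbox=bbox).convert('L'))
    # single inverted Otsu threshold
    _, thresh = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # dilation instead of the blur + second threshold
    kernel = np.ones((3, 3), np.uint8)
    return cv2.dilate(thresh, kernel, iterations=1)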
Hi @crackwitz, I'm new to Python and OpenCV, but cv2.cvtColor gives me a grayscale where the red color comes out closer to gray, while np.array(Image.fromarray(pixels).convert('L')) gives me a grayscale where the red color comes out closer to white! In the first case pytesseract cannot extract the text at all.
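A plausible explanation for that difference (an assumption on my part, not confirmed in the thread): ImageGrab.grab() returns the frame in RGB channel order, while cv2.COLOR_BGR2GRAY assumes BGR, so the red and blue weights get swapped and red ends up darker than it should. Using cv2.COLOR_RGB2GRAY should match convert('L') closely. A quick check (the bbox is a placeholder):

import cv2
import numpy as np
from PIL import Image, ImageGrab

pixels = np.array(ImageGrab.grab(bbox=(0, 0, 800, 600)))    # RGB order
gray_bgr = cv2.cvtColor(pixels, cv2.COLOR_BGR2GRAY)         # treats red as blue
gray_rgb = cv2.cvtColor(pixels, cv2.COLOR_RGB2GRAY)         # correct channel order
gray_pil = np.array(Image.fromarray(pixels).convert('L'))

# difference should be small (rounding only) between RGB2GRAY and PIL's 'L'
print(np.abs(gray_rgb.astype(int) - gray_pil.astype(int)).max())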
Thanks!