Why do I lose information when I convert a 16-bit grayscale image to RGB?

I am working on DICOM images, and when I extract the pixel data its shape is only HxW. I expand the dimensions for my Keras model, which works for me.
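What I do now is roughly this (simplified; the file path is a placeholder):

```python
import numpy as np
import pydicom

ds = pydicom.dcmread("scan.dcm")    # placeholder path to a DICOM file
img = ds.pixel_array                # uint16 array of shape (H, W)
img = np.expand_dims(img, axis=-1)  # (H, W, 1) for my single-channel Keras model
```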
Now I want to test some transfer learning, but the pre-trained models expect HxWx3 input. How can I use them on my data?

Furthermore, how can I expand the dimensions to HxWx3 without losing information? When I try cv2.cvtColor with cv2.COLOR_GRAY2RGB, I lose all the information in the image. What should I do? Thanks.

What do you mean by “losing all information in the image”?

RGB images are, by standard, 3x8 bit. So you probably first need to convert the 16-bit (uint16) image to 8-bit (uint8), ideally with normalization (automatic or manual) so that you keep as much of the data as possible. Then you can use cvtColor to convert it to 3 channels (RGB).
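A minimal sketch of that pipeline (the synthetic array just stands in for your DICOM pixel data, and the 0..4095 range is an assumption, since DICOM data is often 12-bit stored in uint16):

```python
import cv2
import numpy as np

# Stand-in for the (H, W) uint16 pixel array from a DICOM file.
img16 = np.random.randint(0, 4096, size=(512, 512), dtype=np.uint16)

# Min-max normalize the full intensity range into 0..255 and cast to uint8.
img8 = cv2.normalize(img16, None, 0, 255, cv2.NORM_MINMAX, dtype=cv2.CV_8U)

# Replicate the single gray channel into 3 identical channels.
rgb = cv2.cvtColor(img8, cv2.COLOR_GRAY2RGB)
print(rgb.shape, rgb.dtype)  # (512, 512, 3) uint8
```

Note that some precision loss going from 16 to 8 bits is unavoidable (65536 levels down to 256), but min-max normalization preserves the relative contrast. If you then feed the result to an ImageNet pre-trained Keras model, remember to also apply that model's own preprocess_input.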