Integration of TransformedTargetRegressor in Cross-validation, GridSearchCV and Nested cross-validation

Dear course team, thank you very much for delivering this excellent MOOC on Scikit-Learn. I have three related questions about the TransformedTargetRegressor meta-estimator. Let's assume that I have a skewed target variable, such as in the Ames house prices dataset, and that I am applying a logarithmic transformation to reduce the skew of the target. More specifically, I am using TransformedTargetRegressor and its arguments are as follows:

import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.ensemble import HistGradientBoostingRegressor

model_to_tune = TransformedTargetRegressor(
    regressor=HistGradientBoostingRegressor(),
    func=np.log1p,
    inverse_func=np.expm1,
)

Question 1: In cross-validation, is it correct to assume that the cross_val_score/cross_validate functions will log-transform the target variable, fit the model on the training samples, and then provide predictions on the original scale of the target variable by applying the inverse function?

Question 2: For each unique combination of hyperparameters, would it be correct to assume that the GridSearchCV/RandomizedSearchCV estimator will log-transform the target variable, then fit the model on the training samples, and provide predictions on the original scale of the target variable? And that, as the last step, the test sets will be used to compute the average score?

Question 3: Finally, if I want the standard deviation of the scores measured in the outer cross-validation of a nested cross-validation that uses TransformedTargetRegressor, can I assume that the inner GridSearchCV/RandomizedSearchCV will fit the model on the log scale using only the inner training samples, and that the generalisation performance of the candidate models will be assessed on the original scale using the validation sets (inner test sets)? And that the optimised model will then be refitted, on the log scale, to the concatenation of the inner training sets and the validation sets, with its generalisation performance assessed on the original scale of the target variable in the outer cross-validation?

I hope my questions make sense.

Question 1: This is correct. One must perform the evaluation on the original scale; evaluating on the transformed scale would not be methodologically correct.
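
For reference, here is a minimal sketch of what this looks like in code (assuming that X holds already preprocessed, numeric Ames features, y the untransformed SalePrice target, and model_to_tune the estimator defined above):

from sklearn.model_selection import cross_validate

# On each fold, fit() applies np.log1p to the training target before training
# and predict() applies np.expm1, so the scores below are computed on the
# original price scale.
cv_results = cross_validate(
    model_to_tune, X, y, cv=5, scoring="neg_mean_absolute_error"
)
print(cv_results["test_score"])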

Question 2: This is indeed the same principle as in Question 1. The score will be computed on the original scale.
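
A minimal sketch for the grid search (same assumptions as above; the grid values are purely illustrative). Note that, because the regressor is wrapped inside TransformedTargetRegressor, its hyperparameters are reached with the regressor__ prefix:

from sklearn.model_selection import GridSearchCV

param_grid = {
    "regressor__max_leaf_nodes": [15, 31, 63],
    "regressor__learning_rate": [0.05, 0.1, 0.2],
}
search = GridSearchCV(
    model_to_tune, param_grid, cv=5, scoring="neg_mean_absolute_error"
)
# Each candidate is fitted on the log scale and scored on the original scale.
search.fit(X, y)
print(search.best_params_, search.best_score_)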

Question 3: Yes, all models are evaluated on the original scale, whatever the level of cross-validation.
It is also true that, once the best parameters are found, the estimator is refitted on the full training set.
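
A minimal sketch of the nested cross-validation (same assumptions, reusing param_grid from above): the inner search tunes and refits on the log scale internally, while every score reported by the outer cross_validate is computed on the original scale of the target.

from sklearn.model_selection import KFold, cross_validate

inner_cv = KFold(n_splits=5, shuffle=True, random_state=0)
outer_cv = KFold(n_splits=5, shuffle=True, random_state=1)

inner_search = GridSearchCV(
    model_to_tune, param_grid, cv=inner_cv, scoring="neg_mean_absolute_error"
)
outer_results = cross_validate(
    inner_search, X, y, cv=outer_cv, scoring="neg_mean_absolute_error"
)
scores = -outer_results["test_score"]
print(f"MAE: {scores.mean():.0f} +/- {scores.std():.0f}")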

Thank you very much for the confirmation.