Lakehouse
A Lakehouse is an architecture that combines the best features of a data lake, a repository for unstructured and semi-structured data, with those of a data warehouse, which stores the structured tables used to build reports. In other words, you can keep all of this data in one place and query it with PySpark or SQL.
Data within the Lakehouse is stored in the Delta Parquet format: the data itself sits in Parquet files, which Apache Spark works with natively, while the Delta layer adds a transaction log that versions those files within the Fabric environment.
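To illustrate, here is a minimal PySpark sketch, assuming a Fabric notebook attached to the Lakehouse (where the spark session is preconfigured) and a hypothetical table named SampleTable, showing how Delta's versioning lets you read an earlier state of a table:

# spark is the preconfigured SparkSession in a Fabric notebook.
# Read the current state of a hypothetical Delta table in the Lakehouse.
df = spark.read.format("delta").load("Tables/SampleTable")

# Delta keeps a version history, so an earlier state can be read back ("time travel").
df_v0 = spark.read.format("delta").option("versionAsOf", 0).load("Tables/SampleTable")
df_v0.show(5)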
A Lakehouse is created by first signing in to Microsoft Fabric. When the home screen appears, click the Create button and then OneLake catalog.
Click the Add Content button, then select the Lakehouse option.
Finally, give the new object a name and choose the workspace where you want it to be stored.
When it is created, a Lakehouse contains two folders: Files, where you store unstructured and semi-structured files (e.g. JPG, PNG, MP4, CSV, XML, JSON), and Tables, which holds the imported tables containing structured data.
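As a small sketch of how these folders look from code, assuming a Fabric notebook attached to this Lakehouse, the preinstalled mssparkutils utility can list their contents:

# mssparkutils ships with Microsoft Fabric notebooks.
from notebookutils import mssparkutils

# List everything currently stored in the Files area of the attached Lakehouse.
for item in mssparkutils.fs.ls("Files/"):
    print(item.name, item.size)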
Let’s import an arbitrary file. Click Upload files, locate the image Slavko.jpg, and click Upload; the file will appear in the Files folder. If you click on it, Microsoft Fabric opens the image and displays it in a window.

This way you can import any file you want to save in your Lakehouse.
How do you create tables? First import the file, in the same way as we did a moment ago. Let’s import the DBD.csv file. When the file appears in the Files folder, open its context menu, choose Load to Tables, select the New table option, and give the table a name, e.g. DBDT.
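The same load can also be done from a notebook instead of the context menu; a minimal sketch, assuming DBD.csv is already in the Files folder and has a header row:

# Read the uploaded CSV from the Files area.
df = spark.read.option("header", True).csv("Files/DBD.csv")

# Save it as a managed Delta table named DBDT in the Tables folder.
df.write.format("delta").mode("overwrite").saveAsTable("DBDT")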

Both approaches work, but it is recommended to use the Dataflow Gen2 feature when importing, because it lets you transform the data along the way. Click Get Data, then Dataflow Gen2. Load the Excel file DBD.xlsx, and then select the appropriate table in it.

Once the editor opens, you can apply further transformations to prepare the data for reporting.
In the lower-right corner, the Data destination field shows that Microsoft Fabric has pointed the Dataflow Gen2 output at your Lakehouse but has not created a table. Remove this setting, set the destination again, and specify that you want a table created within the Lakehouse.
Once this procedure is completed, the table, which we have named Transactions, appears in the Tables folder.
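Once the dataflow has run, the new table behaves like any other Lakehouse table; a quick check from a notebook, assuming the Transactions name given above:

# Load the table produced by Dataflow Gen2 and inspect the first rows.
transactions = spark.read.table("Transactions")
transactions.show(5)
transactions.printSchema()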
Every time you create a Lakehouse, an SQL analytics endpoint is automatically created.
If you open the SQL analytics endpoint, you can see all the tables, views, and other elements of the Lakehouse, and you can write SQL queries against them to extract the desired data.
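Such a query can also be issued from a notebook with Spark SQL; a minimal sketch against the Transactions table created earlier:

# Count the rows in the Transactions table with a SQL query.
result = spark.sql("SELECT COUNT(*) AS row_count FROM Transactions")
result.show()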
Among other things, a Microsoft Fabric Lakehouse lets you create shortcuts to external data sources: the data is not copied, but accessed as if it lived within the same tenant.
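Once such a shortcut exists (it is created through the Lakehouse UI), it reads exactly like a local item; a hedged sketch, where ExternalSales is a hypothetical shortcut under Tables:

# ExternalSales is a hypothetical shortcut pointing at external data;
# it is read like any local Delta table, without the data being copied.
external = spark.read.format("delta").load("Tables/ExternalSales")
external.show(5)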