Title,Source,Year,Primarily for Serverless,Core Proposal Type,For the Edge or Fog Computing,Scope,Transparent to Functions,PoC Implementation,Serverless Environment,Experimental Evaluation,Performance Evaluation,Security Evaluation,Threat Model Defined,Open Source PoC,Open Source Location,Notes
Se-Lambda: Securing Privacy-Sensitive Serverless Applications Using SGX Enclave,https://dx.doi.org/10.1007/978-3-030-01701-9_25,2018,Yes,Security,No,Runtime,"Yes (inferred information, not directly specified by the authors)",Yes,N/A,Yes,Yes,"No (although the authors say they did, we do not consider this a proper security evaluation, as the authors only reported that the system defends against attack X, without any proof or relevant data)",Yes,No,N/A,"The proposal is missing many details, which makes it hard to assess whether the authors actually implemented it. There are also many grammatical problems, sometimes making reading difficult."
Secure serverless computing using dynamic information flow control,https://dx.doi.org/10.1145/3276488,2018,Yes,Security,No,Network,No (the authors mention that minimal changes are required to the code),Yes,AWS Lambda & OpenWhisk,Yes,Yes,"No (the authors say they did, but at most they mention a simulated code injection, without demonstrating what they did in the paper or showing the corresponding results)",Yes,Yes,https://github.com/kalevalp/trapeze,"The authors seem to put a lot of hope in the shim executing inside the same sandbox where the untrusted tenant function is running. However, that is usually far from reality, and a malicious function might attack the shim and use any label it wants, turning this mechanism upside down. Therefore, is there no way of achieving information flow control without placing the label control at the shim (or any other entity inside the function sandbox)?"
AccTEE: A WebAssembly-based Two-way Sandbox for Trusted Resource Accounting,https://dx.doi.org/10.1145/3361525.3361541,2019,No,Security,No,Runtime,"Yes (inferred information, not directly specified by the authors)",Yes,N/A,Yes,Yes,No,Yes,No,https://github.com/ibr-ds/AccTEE/tree/master,Is it a good idea to change the function source code provided by the developer?
CLEMMYS: Towards secure remote execution in FaaS,https://dx.doi.org/10.1145/3319647.3325835,2019,Yes,Security,No,Runtime,"Yes (inferred information, not directly specified by the authors)",Yes,OpenWhisk,Yes,Yes,"No (although the authors say they did, there is no data whatsoever; they only describe the general lines of what they supposedly did and the main findings)",Yes,No,N/A,"It appears that if a function changes its N value (index) in the chain-integrity verification algorithm, it can impersonate any other function. Therefore, it seems that the verification of the execution order is not done correctly. Moreover, it seems the solution only works for the client who created the application, not for generic users (as explained at the end of section 4). Rollback attacks also remain possible (something the authors discussed as an open issue)."
Diggi: A secure framework for hosting native cloud functions with minimal trust,https://dx.doi.org/10.1109/TPS-ISA48467.2019.00012,2019,Yes,Security,No,Runtime,No (functions need to be prepared to work with this mechanism),Yes,N/A,Yes,Yes,No,Yes,No,N/A,"Integration with state-of-the-art Serverless platforms and orchestrators might be complex. Moreover, it works for natively compiled languages, while the languages most used in serverless, Python and JS, are interpreted. It is crucial to have a baseline in the throughput evaluation; without it, understanding the real impact of the proposed system becomes significantly more challenging."
S-FaaS: Trustworthy and accountable function-as-a-service using Intel SGX,https://dx.doi.org/10.1145/3338466.3358916,2019,Yes,Security,No,Runtime,"Yes (inferred information, not directly specified by the authors)",Yes,OpenWhisk,Yes,Yes,No,Yes,Yes,https://github.com/SSGAalto/sfaas,How do two functions communicate with each other? Is it using the same protocol? What is the performance impact of such a mechanism when many functions are chained together?
"Trust more, serverless",https://dx.doi.org/10.1145/3319647.3325825,2019,Yes,Security,No,Runtime,"Yes (inferred information, not directly specified by the authors)",Yes,N/A,Yes,Yes,No (we do not consider SLOC to be a proper security evaluation),Yes,No,N/A,"Will there not be problems with executing functions of different tenants inside the same enclave? The sandboxes presented by the authors seem to offer little protection between functions. Also, a function might be able to read the data that another function is handling because functions are not protected by their own enclave. Another point of concern is the potential resource wastage due to the continuous running of an enclave. The authors seem to have tested only the cold start, without considering the overhead of enclave creation. This raises the question: is it necessary to always have an enclave running? Could this not lead to resource wastage, especially if there are enclaves with no functions instantiated, causing the CSP to waste resources that are not being used? Moreover, the authors only proposed using enclaves with a JavaScript runtime. What is the feasibility of using other runtimes for other languages? Will that not force the CSP to have many enclaves for each supported language? And, because of the previous set of questions, will this not contribute to a waste of resources?"
ACE: Just-in-time Serverless Software Component Discovery Through Approximate Concrete Execution,https://dx.doi.org/10.1145/3429880.3430098,2020,Yes,Security,No,Function,"Yes (inferred information, not directly specified by the authors)",Yes,N/A,Yes,Yes,"Yes (but the authors only analyzed the precision, recall, and F1 of their proposed system)",No,No,https://github.com/peaclab/ACE,"The fingerprinting methodology is described as using the values of the first 32 registers in the final state of the execution in the created virtual machine. However, does this mean that ACE has to run all known vulnerabilities and faulty functions to create the proper fingerprint database? Although feasible, this requires substantial effort, as the system must obtain all of these functions in order to fingerprint them properly. Moreover, the system can only find known vulnerabilities quickly; the issue of protection against unknown vulnerabilities remains (this is not a problem of the proposed system, just a design characteristic; nevertheless, it is an open question in research)."
Firecracker: Lightweight virtualization for serverless applications,https://www.usenix.org/conference/nsdi20/presentation/agache,2020,Yes,Security & Performance,No,Runtime,Yes,Yes,N/A,Yes,Yes,No,No,Yes,https://firecracker-microvm.github.io/,"The article's primary focus is not security. That said, the proposal is very sound, and it is a critical paper in this area, as Firecracker is used in what is probably the most used Serverless platform: AWS Lambda. The only complaints are that the authors did not provide a threat model and that the evaluation could have compared Firecracker with containerized environments to show all its advantages."
Valve: Securing Function Workflows on Serverless Computing Platforms,https://dx.doi.org/10.1145/3366423.3380173,2020,Yes,Security,No,Network,Yes (but the developer has to define the rules or at least update the automatically generated security policies),Yes,OpenFaaS,Yes,Yes,No,Yes,No,N/A,"What happens when the agent inside each container gets compromised? The attacker will simply have control over it and over the overall chain (it can change the current taint before contacting another function or change the taint validation at the beginning; not only that, it can change the cumulative taint, therefore changing everything and avoiding certain checks). The system call analyzer can also be affected by this aspect. Running such an agent inside the same container as each function is probably a very bad idea. The way the rules are created also seems somewhat odd. The controller initially profiles the application, generating a set of initial rules. But then, the developers themselves are responsible for changing those rules to include new ones (restrictive or not), which means that they are responsible for the chain's security (this is not very service-oriented if the developer carries such a big responsibility). It seems that the developer needs to build a blocklist to protect the application, which is strongly discouraged in cybersecurity and might pose serious risks. Finally, in the evaluation, the authors explain that the container image increases in size because it needs to include the Valve agent. What happens when the CSP needs to update the agent? Moreover, how is the agent able to use strace inside the container? Supposedly, containers do not have that kind of access (kernel security restrictions). Does this mean that Valve needs to use containers with more access to the kernel than it should?"
Workflow Integration Alleviates Identity and Access Management in Serverless Computing,https://dx.doi.org/10.1145/3427228.3427665,2020,Yes,Security,No,Network,Yes,Yes,OpenFaaS,Yes,Yes,No,Yes,No,https://bitbucket.org/sts-lab/will.iam/src/master/,"At the end of page 502, the authors state that their system requires the access control policy writer to provide a JSON configuration file with the policies. Is it not concerning that this is separated from the workflow that the developers might create? For example, the policy writer might make an error, and a function that was supposed to call another one is now blocked. Another potential issue is that the authors refer to the problem of access control policy misconfiguration as a limitation of existing serverless systems. However, the authors' system does not solve this limitation: if there is an error in the policies built with their system, the result might be an availability failure (availability being an essential component of the CIA triad) or an overly permissive policy, which might allow connections that were not supposed to happen. There is also a potential problem with the overall concept of defining, through these kinds of policies, the access to data that each user has. A developer might have a function that internally verifies the user identity and authorization level and, from that, decides on which database (or on which table in the same database) it needs to operate. However, if this verification is done at the workflow level, there is a potential problem: either the function will not operate properly in some cases because of the lack of permissions for certain users, or the policy manager might create a data access policy wide enough to allow the function to access all the necessary data and, in this case, the proposed system does not introduce any protection. Once again, as with Valve, the authors are probably putting too much responsibility on the agent running inside each function. What happens when the function starts controlling the agent? Moreover, the system seems to only care about the mandatory policies at the beginning of execution. However, an attacker might be able to craft an exploit that forces a function to execute another that it was not supposed to at the beginning. Therefore, how does the system behave in such scenarios, and how does it avoid these unwanted later executions?"
Compiler-Assisted Semantic-Aware Encryption for Efficient and Secure Serverless Computing,https://dx.doi.org/10.1109/JIOT.2020.3031550,2021,Yes,Security,No,Data,"Yes (although it needs a special compiler that could be made transparently available (the authors did not discuss this part); moreover, it seems the developers need to provide a configuration file, but it was not discussed by the authors in the paper)",Yes,AWS Lambda,Yes,Yes,No,Yes,Yes,https://github.com/corelab-src/selectivecrypt,"First, it seems the authors did not consider that in Serverless, functions are usually combined into workflows and that when a function is triggered, a chain of functions is usually executed because of that trigger. These functions will most probably work on the data sent to the first one. Therefore, although the first function might not process a piece of data, any other function in the chain might do so. However, the system seems to only verify the data-handling type for each function independently, not supporting chains of multiple functions. Moreover, the system is not generic; it is applied to IoT. Could it not be generalized to the cloud, where the compiler could be executed automatically within the CI/CD pipeline that precedes the function installation? Nevertheless, a potential issue with its usage in a more general scenario is its requirement for the runtime on the user end.
Moreover, although the authors compared their system with mechanisms that use HE without symmetric encryption, an important comparison would be with not using HE at all (for example, systems that only use symmetric encryption). This is because most offerings nowadays do not use HE at all, so comparing against such systems would be interesting and provide a more relevant performance impact analysis for most cloud users (at least in a general context)."
Concentrated isolation for container networks toward application-aware sandbox tailoring,https://dx.doi.org/10.1145/3468737.3494092,2021,No,Security,No,Network,"Yes (inferred information, not directly specified by the authors)",Yes,N/A,Yes,Yes,"No (although the authors say they did, there is no data whatsoever; they only describe the general lines of what they supposedly did and the main findings)",No,No,N/A,"Serverless environments usually use clusters of multiple servers, where functions can be installed on different hosts. Is the system able to adapt to such conditions? The authors seem to suggest that using NICs and an access-control/routing mechanism inside a hypervisor will harden these environments against attackers who are able to escape their container and control the underlying host. Where did the authors get this information? If the attacker is able to reach the OS, it will probably also be able to control everything else running within it."
Confidential serverless made efficient with plug-in enclaves,https://dx.doi.org/10.1109/ISCA52012.2021.00032,2021,Yes,Security enabler,No,Runtime,"Yes (although it is not the main point of the proposal, which is focused on improving the performance of enclaves for Serverless; nevertheless, the authors say they used an in-house version of LibOS, so it seems the mechanism is transparent to any functions)",Yes,N/A,Yes,Yes,No,Yes,No,N/A,"The authors propose some hardware or microcode modifications related to SGX.
What is the probability of the proposal being integrated into Intel SGX? And if it were integrated, would the modifications not be more disruptive than the authors argue?"
"Placing FaaS in the Fog, securely",https://ceur-ws.org/Vol-2940/paper15.pdf,2021,Yes,Security,Yes,Scheduling,Yes,Yes,FaaS2Fog (created by the authors),No,No,No,No,Yes,https://github.com/di-unipi-socc/FaaS2Fog,"First, it is somewhat difficult to assess the overall quality of the proposal because there is no threat model, nor are there any tests (security- or performance-related). In any case, the system can only handle simple serverless workflows, where the same trigger always leads to the same flow of functions being executed. Therefore, there is a need to study how this system could handle more complex workflows (this was also pointed out by the authors in the conclusion). Another potential issue is with function placement: what if a function does not have a node where it can be launched? Will the system incur an availability problem?"
Scalable Memory Protection in the PENGLAI Enclave,https://www.usenix.org/conference/osdi21/presentation/feng,2021,No,Security enabler,No,Runtime,"No (after some research, we found that one would need an SDK to develop a function tailored for this environment: https://github.com/Penglai-Enclave/penglai-sdk/tree/master)",Yes,N/A,Yes,Yes,No,Yes,Yes,https://github.com/Penglai-Enclave,"Most comments we have related to this work stem from how one could integrate such a system with the current state-of-the-art Serverless platforms and application definitions (e.g., using workflows). This comment is even more reasonable if we consider that this work focused solely on RISC-V architectures, while most of the current Serverless platforms were designed considering x86 architectures."
A Fully Decentralized Architecture for Access Control Verification in Serverless Environments,https://dx.doi.org/10.1109/ISCC55528.2022.9912764,2022,Yes,Security,No,Network,Yes,Yes,N/A,Yes,Yes,No,No,No,N/A,"The authors claimed that their system is ""fully"" decentralized. However, this seems too bold a claim, as the only decentralized component is the rule engine; the registry and the controller remain centralized. This means that some of the issues the authors raised at the beginning concerning centralized architectures still hold: if the controller is compromised, it will write wrong rules to the registry, impacting the rule engines, and access control will not be properly enforced; likewise, if the registry is attacked, all rule engines are affected. Moreover, the paper lacks a threat model, so it is very difficult to understand the conditions and what, in fact, the authors are protecting. Furthermore, what is the executor node? Is it a compute node, such as a server (physical or virtual)? Or is it another abstraction? The evaluation also seems to lack some quality. For the end-to-end latency of the multiple components, there is no baseline with which to compare the results. Then, for the comparison of the end-to-end latency between centralized and decentralized access control, the authors did not specify the conditions of their centralized system. Therefore, it is difficult to draw real conclusions from these results."
ALASTOR: Reconstructing the Provenance of Serverless Intrusions,https://www.usenix.org/conference/usenixsecurity22/presentation/datta,2022,Yes,Security,No,Network,Yes,Yes,OpenFaaS,Yes,Yes,Yes (it is very preliminary),Yes,Yes,https://bitbucket.org/sts-lab/alastor/src/master/,"It seems that all checks are done manually by the developer. Is the developer supposed to know the ""normal"" graph to distinguish it from the anomalous one? Is that not very difficult, considering that the graphs can be enormous? Because the authors trace all syscalls, any function with heavy dependence on I/O will suffer a lot (the authors verified this). As with Valve and WILL.IAM, there is an agent inside each container. What happens if it is compromised? Furthermore, the controller is the one requesting graph updates from each agent based on a cron job. If an attacker is able to quickly compromise the agent, the controller may receive erroneous and misleading data. Moreover, how is the agent able to use strace inside the container? Supposedly, containers do not have that kind of access (kernel security restrictions). Does this mean that ALASTOR needs to use containers with more access to the kernel than it should?"
An Innovative Blockchain-Based Orchestrator for Osmotic Computing,https://dx.doi.org/10.1007/s10723-021-09579-7,2022,Yes,Security,Yes,Data,Yes,Yes,OpenFaaS,Yes,Yes,No,No,No,N/A,"The work is somewhat simple. The authors propose the usage of blockchain networks to protect the integrity of application configurations. However, a lot of details are missing. For instance, in the motivation, the authors also talk about using asymmetric encryption to protect confidentiality, but they did not expand on this matter. They also talk about availability but do not expand on it in the remainder of the paper, and some results seem to contradict the availability of the blockchain network (or at least when a user wants to save a new configuration). The VPN usage is also missing a lot of details; the authors only say they can use a VPN between the edge device and the endpoint. The results also lack detail. For instance, when the authors compare the performance of publishing or retrieving service configurations, what does the number of requests mean? Is the same user sending 1000 requests at once to publish a configuration? That does not seem to make much sense.
Furthermore, the implementation details, in conjunction with the analysis, are somewhat strange. For example, the authors say they used OpenFaaS as the Serverless orchestrator. But then, where is OpenFaaS used in the analysis? And on which machine is it installed (the controller)? Apart from the missing details, there are also other questions regarding the system itself. For instance, the authors state that when a configuration is saved, it has a corresponding service owner ID and a device ID. Therefore, is the user directly deciding the device where each function will be installed? Is that not controlled at the orchestration level? That seems strange, since Serverless is supposed to abstract management and orchestration from the client. Moreover, how does the system verify that the service owner ID is, in fact, from the real owner? It seems that there is no control of that variable and, consequently, an attacker might impersonate a client and attack the integrity of the configurations. There is also the potential issue of the price of saving the configurations in the blockchain. The authors did not address this issue, and the configuration they used is rather small. How does the price vary for applications with many functions and intricate workflows?"
Gringotts: Fast and Accurate Internal Denial-of-Wallet Detection for Serverless Computing,https://dx.doi.org/10.1145/3548606.3560629,2022,Yes,Security,No,Function,"Yes (although it is difficult to assess, as the authors did not specify that; they say their agent is a dynamically loaded library, but this may mean that the CSP is changing the source code, or that the developer has to include the library in the function; from these two options, we cannot know which one the authors followed)",Yes,Knative,Yes,Yes,Yes (it is very preliminary),"Yes (they do not specify it directly, but describe the attack they are trying to solve)",No,N/A,"This is a very complete work with many demonstrations.
However, there are some potential issues and open questions. First, the proposed system, like other works (such as Valve), proposes using an internal agent inside each function. In this case, it might not be as harmful as in the other works, since the attacker trying to exhaust the resources of other functions does not run its functions in the same sandbox as the targeted function's agent; still, it may open another threat vector. This is because an attacker might use this system to maliciously alter the functioning of the agent running alongside its own functions and provide misleading information to the CSP. If the CSP uses this information as proof of a DoW attack, a crafty attacker might use this to its advantage by simply simulating a DoW attack that never happened and getting compensation from the CSP. This, of course, would depend on the CSP's business model. Moreover, although the terms external and internal DoW seem to have been coined by these authors, internal DoW does not seem to be caused only by resource contention. If a function starts calling another one many times, is that not an internal DoW as well? The authors also describe using the perf_event_open() syscall in the agent. However, is this possible inside a ""normal"" container? Will this not need a container with more permissions than usual? Nevertheless, the most concerning potential issue with this method is the feasibility of these kinds of attacks. The authors showcased that they are possible. However, they seem to be based on the attacker also requesting resources for a long period, a period in which the targeted tenant will be billed more. Therefore, will this attack not also be very costly to the attacker? Moreover, co-locating the attacking function on the same server as the targeted function might also incur significant costs for the attacker, since it is based on trial and error.
Finally, dynamically loading the agent into the function opens potential issues: if it is forcefully included in the developer's source code, the CSP is breaching the contract by changing the provided code (and potentially introducing a new threat vector); it can also be done by changing one of the libraries applications usually use, but this would require changing the library's source code on every update and, once again, could introduce a new threat vector."
MicroFaaS: Energy-efficient Serverless on Bare-metal Single-board Computers,https://dx.doi.org/10.23919/DATE54114.2022.9774688,2022,Yes,Security/Performance,No,Runtime,"Yes (inferred information, not directly specified by the authors)",Yes,N/A,Yes,Yes,No,No,Yes,https://github.com/peaclab/MicroFaaS,"First, the work is not solely focused on security but also on energy efficiency. Nevertheless, the work misses a threat model, which makes it difficult to understand the full extent of what the authors are trying to protect and under what conditions. In any case, as in many other works, this work does not consider, at least for now, that Serverless applications are a combination of functions. How will these small pieces of equipment execute and scale them efficiently? Moreover, it is not completely clear whether a function can serve multiple requests on the same SBC without restarting. If not, will it not be very inefficient? Also, does the system use warm/cold environments? That is, are the SBCs always on or always powered off? In either case, problems can arise (if they are always powered off, starting them takes a long time; if they are always powered on, they waste energy unnecessarily). Finally, the results demonstrate that (not counting the cold start, it seems) the execution time is much worse when compared to execution inside a VM."
QFaaS: Accelerating and Securing Serverless Cloud Networks with QUIC,https://dx.doi.org/10.1145/3542929.3563458,2022,Yes,Security enabler,No,Network,Yes,Yes,OpenFaaS,Yes,Yes,No,No,Yes,https://github.com/qfaas-project,"The work focuses more on performance enhancements than on security itself, which is understandable. Just a small remark: in Section 2.2, the authors mention that TLS is not enabled by default in OpenFaaS and other open-source platforms. However, this can be viewed as normal, as these projects are often used only for research and testing purposes that do not require security. The assumption that connections among functions always go through the API gateway does not hold for every orchestrator nor for every serverless model. In reactive development models, where a function in a workflow is executed after the execution of another function (and receives its output as its input), these connections go through the function invoker, usually not passing through the API gateway. The benefit of QUIC 0-RTT will depend a lot on the configuration of the function and on the function's usage over time, as it is only available for warm-started containers; cold-started containers will always need to perform the full QUIC handshake. Finally, in the test depicted in Figure 7, are the functions in each experiment in the same conditions? That is, are the same functions always installed on the same nodes across experiments?"
QKD-Secure ETSI MEC,https://dx.doi.org/10.1109/WOLTE55422.2022.9882872,2022,Yes,Security,Yes,Network,Yes,No,N/A,No,No,No,No,No,N/A,"The work is rather simple, and only an architecture without much detail is given. The authors did not provide any proof-of-concept implementation or any evaluation. Moreover, it seems that the proposed mechanism could be applied to any communication between two parties, since it does not depend on the Serverless environment or on MEC."
Themis: A Secure Decentralized Framework for Microservice Interaction in Serverless Computing,https://dx.doi.org/10.1145/3538969.3538983,2022,Yes,Security,No,Network,"Yes (inferred information, not directly specified by the authors)",Yes,Themis (created by the authors),Yes,Yes,No,Yes,Yes,https://github.com/atlas-runtime/themis,"First, probably the most obvious question is how this system can be used with state-of-the-art serverless orchestrators and their functionalities, such as the definition of workflows. Nevertheless, the system might have other potential issues. First, the authors claim that their system is decentralized. However, they propose that nodes use asymmetric keys to establish a secure communication channel, yet do not explain how the public keys are initially shared among entities (i.e., how node A gets node B's public key in order to use it later to establish a trusted connection between A and B). Most probably, this key distribution is done in a centralized manner, which might introduce an unexpected attack surface. Then, the authors propose a low-level protocol to establish the connection between nodes, which is very similar to mTLS. However, they never explain why they did not use mTLS, a far more studied and supported protocol than their own proposal. Still on the low-level protocol, the authors describe that after the secret symmetric key is established (using Diffie-Hellman), nodes start communicating, sending an encrypted message, a MAC of that message, and the node's identity (a hash of its public key). The identity is sent in clear text because, with it, the receiver is able to recognize where the message is coming from and use the associated secret key to decipher the message. However, what happens if an attacker intercepts the message and changes the identity value? Will that not cause a DoS (because the receiver will not recognize the identity and, therefore, will not be able to respond)? Moreover, the authors did not describe how they manage inconsistencies across nodes, a problem that must be analyzed in decentralized systems. Considering the last two points, it seems the authors overlooked the problems that may impact the availability of the system. Finally, the evaluation procedures are somewhat vague. What do the authors consider as their ""vanilla""? It would also be interesting to compare the system's performance against a centralized Serverless orchestrator, such as one based on Kubernetes."
"Type, pad, and place: Avoiding data leaks in Cloud-IoT FaaS orchestrations",https://dx.doi.org/10.1109/CCGrid54584.2022.00094,2022,Yes,Security,Yes,Scheduling,Yes,Yes,SecFaaS2Fog (created by the authors),No,No,No,"Yes (although the authors call it the ""Attacker model"")",Yes,https://github.com/di-unipi-socc/SecFaaS2Fog,"It seems there is still a possibility of an attacker being able to exploit the if conditions even with the padding of functions. The authors propose adding a dummy function after the real function when the condition is true, and a dummy function before the real function when the condition is false. Therefore, no matter the result of the if condition, the order of service access will always be the same. However, do the dummy functions execute for the same period as the real functions they are mocking? And do the service accesses result in the same amount of data for both mock and real accesses? Since the authors modeled an attacker as being able to monitor the system resources, the attacker is probably able to access such metrics. Therefore, through these side channels, the result of the if condition may be inferred. Moreover, the padding of functions seems to be too much of a waste of resources.
This is not only because the system adds a new function in each if-branch execution but also because both real functions will request the same hardware and network resources (obtained by the union of the initial resources of both functions). What is the cost of this mechanism? Is this cost transferred to the application provider?"
A Prototype for QKD-secure Serverless Computing with ETSI MEC,https://dx.doi.org/10.1109/SMARTCOMP58114.2023.00043,2023,Yes,Security,Yes,Network,"Yes (inferred information, not directly specified by the authors)",Yes,OpenWhisk,No,No,No,No,No,N/A,"The work is rather simple, and only an architecture without much detail is given. The authors did not provide any evaluation. Moreover, it seems that the proposed mechanism could be applied to any communication between two parties, since it does not depend on the Serverless environment or on MEC."
Accelerating Extra Dimensional Page Walks for Confidential Computing,https://dx.doi.org/10.1145/3613424.3614293,2023,No,Security enabler,No,Runtime,"No (after some research, we found that one would need an SDK to develop a function tailored for this environment: https://github.com/Penglai-Enclave/penglai-sdk/tree/master)",Yes,N/A,Yes,Yes,No,No,Yes,https://github.com/Penglai-Enclave/Penglai-Enclave-sPMP,"Most comments we have related to this work stem from how one could integrate such a system with the current state-of-the-art Serverless platforms and application definitions (e.g., using workflows). This comment is even more reasonable if we consider that this work focused solely on RISC-V architectures, while most of the current Serverless platforms were designed considering x86 architectures."
Always-On Recording Framework for Serverless Computations: Opportunities and Challenges,https://dx.doi.org/10.1145/3592533.3592810,2023,Yes,Security,No,Function,Yes,Yes,OpenFaaS,Yes,Yes,No,No,No,N/A,"It is a relatively simple work, since it is a preliminary proposal that, it seems, aims to raise questions about the possibility of using such a system. Moreover, it is also not focused exclusively on security. The only concern that could be pointed out is the agent's execution inside the same sandbox in which the function runs." An approach for modeling the operational requirements of FaaS applications for optimal deployment,https://dx.doi.org/10.1016/j.infsof.2023.107242,2023,Yes,Security and more,Yes,Scheduling,"Yes (however, as far as we could understand, if the scheduler outputs that function fusion is the best option, it seems the developer must take action to fuse those functions together)",Yes,AWS Lambda (with AWS Greengrass for the edge),Yes,Yes,No,No,Yes,https://github.com/Benedikt92/MasterThesisRepo,"It seems that the system might not scale very well when we have very long workflows. The problem appears right at phase 1, when the system creates all potential workflow configurations: the total number before filtering is given by 2^n, with n equal to the number of functions in the workflow. Let's imagine a workflow has 50 functions. Will the system compute 1125899906842624 different alternatives? It seems like a lot of computing power is needed for this phase alone. The other phases might also pose scalability problems. There are also missing details on how the system profiles functions. What if a function's execution depends on the input? Or what if a function uses a database to store content? How will the profile handle that? Finally, one last comment is that the privacy considerations are very simple. Nevertheless, this paper was not about security or privacy but about scheduling, where one of the variables was privacy. 
In any case, it might be interesting to study how privacy can affect the scheduling of functions between the cloud and the edge in further studies. (The authors also pointed out some other limitations at the end of the paper.)" Declarative Secure Placement of FaaS Orchestrations in the Cloud-Edge Continuum,https://dx.doi.org/10.3390/electronics12061332,2023,Yes,Security,Yes,Scheduling,Yes,Yes,SecFaaS2Fog (created by the authors),Yes,Yes,No,"Yes (although the authors called it ""The Attacker Model and Security Constraints"")",Yes,https://github.com/di-unipi-socc/SecFaaS2Fog,"This work is the sum of the two previous ones with a delta of tests and results. Therefore, there is not much more to say. One of the potential problems pointed out in the previous work was that the padding could introduce significant performance setbacks. In this work, the authors only tested two applications, each with only one conditional statement. Therefore, it is still difficult to understand that impact." Formally Verifying Function Scheduling Properties in Serverless Applications,https://dx.doi.org/10.1109/MITP.2023.3333071,2023,Yes,Security,Yes,Scheduling,Yes,"No (the authors never mention anything, but most of the text seems to not suggest its existence)",N/A,No,No,No,No,No,N/A,"This seems to be a very preliminary study or a highlight of an existing one. It does not have any evaluation, and, in any case, it is only a language used to define scheduling policies related to data location. Therefore, there are many questions related to how the scheduling process will take place and how the defined rules will be applied in practice. Moreover, how does the formal verification using PPDL scale? Also, is it not a hard task for developers to define all the necessary policies for all functions of a big workflow? And will it not be error-prone?" 
Groundhog: Efficient Request Isolation in FaaS,https://dx.doi.org/10.1145/3552326.3567503,2023,Yes,Security,No,Runtime,Yes,Yes,OpenWhisk,Yes,Yes,No,Yes,Yes,https://groundhog.mpi-sws.org/,"As with past studies, this work once again includes an agent inside the function runtime. Although it might not matter for the specific problem the authors are trying to solve, an attacker who compromises the runtime might disrupt the agent's functioning, disabling the restoration process and, with this, giving a false sense of security. But, once again, this was not the threat the authors tried to solve. In any case, in future works, if the agent runs outside the function sandbox, it could also be used to defeat attacks that modify the function execution. Another potential problem is that the proposed agent needs access to the ptrace syscall. This is not allowed in most containers and must be configured by granting the SYS_PTRACE capability to the container and disabling the ptrace hardening. However, this might increase the attack surface of the container (https://wiki.ubuntu.com/SecurityTeam/Roadmap/KernelHardening#ptrace_Protection) and give a potential attacker more tools to conduct an attack than they should have." Guarding Serverless Applications with Kalium,https://www.usenix.org/conference/usenixsecurity23/presentation/jegan,2023,Yes,Security,No,Network,Yes,Yes,"OpenFaaS (and AWS Lambda only for testing, but without the ""real"" Kalium)",Yes,Yes,No,Yes,Yes,https://www.usenix.org/system/files/usenixsecurity23-appendix-jegan.pdf,"It was not completely clear how Kalium profiles more complex applications. Do the developers need to have real users interacting with the application? How does Kalium make sure it has all the possible allowed flows? How many trials does it need to properly profile an application? 
Although the authors say it is more important to have zero false negatives, false positives are also bad because they may impact the availability of the application, which is one of the three pillars of the CIA triad. Nevertheless, the authors point out that their system might lack precision, but this continues to be concerning. As with Valve, how do the authors make sure they are blocking and observing all network-related system calls? The authors seem to have considered only two, but a crafty attacker might be able to use other system calls to achieve the same results. This is because this method resembles a blocklisting approach, where the security system only blocks some components of the environment, but the attacker might use other unexpected ones. Some other lacking points, which the authors pointed out, are that Kalium only allows single-threaded processes inside its runtime and, more concerning, that Kalium does not work for concurrent requests (probably, if the same function execution receives more than one request, Kalium will not be able to differentiate among the different users triggering the flow). The LCP URLs proposal also seems lacking, as it applies a policy that is too broad in order to maintain performance." PrivFlow: Secure and Privacy Preserving Serverless Workflows on Cloud,https://dx.doi.org/10.1109/CCGrid57682.2023.00049,2023,Yes,Security,No,Network,Yes,Yes,OpenFaaS,Yes,Yes,No,Yes,No,N/A,"The authors seem not to consider the fact that Bloom filters have false positives. If there is a false positive, is the attacker able to conduct one of the attacks the system promises to protect against? Although the authors analyzed the optimal design for a Bloom filter, the size they consider is too big for real-world scenarios. There is also the issue of how the serverless orchestrator verifies the identity of the functions. If a function impersonates another one, will that not allow the attacker to conduct a successful attack? 
Moreover, the authors only considered the PPE to be distributed. However, the API gateway might also be attacked, being a weak link in the system. The usage of the PPE, although interesting, seems to have too negative an impact on the system's performance. Moreover, the authors also say that their system provides protection against DoW attacks. However, they do not expand much on how that is done, only doing so in the security analysis section. Even there, it was not explained in much detail, but it seems that the protection is obtained since an attacker would call just one function (already compromised), which would then call any other function many times. It seems that the PPE would avoid this attack because a single function of a sequence is not a valid sequence. However, it seems that the authors are supposing that the application provider will create applications with sequences of more than one function, which might not always be true. Moreover, the attacker might call a valid sequence until it reaches the compromised function, which will then start the DoW." Reusable Enclaves for Confidential Serverless Computing,https://www.usenix.org/conference/usenixsecurity23/presentation/zhao-shixuan,2023,Yes,Security enabler,No,Runtime,"Yes (inferred information, not directly specified by the authors)",Yes,"OpenWhisk (WebAssembly on OpenWhisk, or WOW)",Yes,Yes,No,Yes,Yes,https://github.com/OSUSecLab/Reusable-Enclaves,"In Section 4.3, the authors specified their MLIEC proposal, which partitions the enclave address space into multiple security layers. However, to achieve this in a secure way, the authors proposed creating boundaries by instrumenting the source code of the multiple components running inside the enclave (the runtime, the snapshotter, and the attestation module). Therefore, they implemented multiple mechanisms to avoid specific types of attacks that could break these boundaries (for instance, ROP attacks). 
However, this seems like a blacklisting approach, where the system implements mechanisms to avoid known attacks. Knowledgeable attackers might find a new technique, not anticipated by the authors, to break these boundaries. Therefore, is it not naive to presuppose that we can implement all the mechanisms needed to restrain attackers from breaking these boundaries?" Secure-by-design serverless workflows on the Edge–Cloud Continuum through the Osmotic Computing paradigm,https://dx.doi.org/10.1016/j.iot.2023.100737,2023,Yes,Security,Yes,Network,Yes,Yes,OpenWolf (based on OpenFaaS),Yes,Yes,No,No,No,N/A,"In the paper, the authors say they use Osmotic Computing characteristics to improve serverless security in the cloud-edge continuum. However, it seems that if the authors removed everything related to Osmotic Computing from the paper, the proposal would still make sense. In fact, it seems that Osmotic Computing only introduces confusion, mainly for a reader who does not have much knowledge of that topic. Most of the bridges between the proposal and Osmotic Computing can be viewed as cybersecurity common sense, as they are common policies, simply with different names (related to Osmotic Computing). Moreover, the authors refrained from providing a threat model, which hurts the proposal because it is difficult to understand everything that the authors want to protect and under which conditions they are protecting the system. Nevertheless, apart from protecting secrets, the authors also propose protecting serverless environments against malicious FaaS invocations. However, in part because the authors did not provide a clear threat model, it is difficult to clearly understand which malicious invocations they are trying to protect against. Is it a function in the same application calling another function in the same application? Is it between different applications? Using Osmotic Computing-related nomenclature only hurts this explanation. 
Nevertheless, the proposed protection was not clear either, and therefore, neither were the security implications. In any case, it seems that the authors propose grouping functions (probably of the same application) inside the same Kubernetes namespace; when a function wants to call another, it must contact the proxy inside that same namespace, which will verify the request with the orchestrator, which will then call the function in question. Does the system require multiple proxies scattered across multiple namespaces? What is the performance impact of this decision? Moreover, if the decision of whether a function can call another one is always made within the orchestrator, why do we need to isolate functions in different namespaces with their own proxy? Then, although the authors provided a performance analysis, the impact of many decisions was not studied. For instance, the impact of the sidecar or of the WireGuard VPN used between the cluster nodes. For the analysis provided, the encryption overhead and execution time of parallel and sequential workflows demonstrated that the proposed ciphering cost 2x and, in some cases, 3x the performance of the baseline system (without using any ciphering)." Securing Serverless Workflows on the Cloud Edge Continuum,https://dx.doi.org/10.1109/CCGridW59191.2023.00032,2023,Yes,Security,Yes,Network,Yes,Yes,OpenWolf (based on OpenFaaS),Yes,Yes,No,No,No,N/A,"The proposed system is fairly simple, and the fact that the authors present it as a cloud-edge computing solution is strange because they barely mention it when designing the mechanism (however, it can be argued that the system is adapted to the edge because it seems to have little performance impact). There are multiple details that are not properly explained. For example, what do the authors mean by state? Is it a workflow function? And how does the state authentication work? 
A function calls another, and, therefore, is it authenticated? Or does it retrieve a result, and the workflow continues with the execution to the next function? The explanation of state authentication misses a lot of details. Then, the authors seem to have missed some details of JWTs. JWTs, according to their RFC, are a way of sharing claims between two parties. Therefore, they can be used to pass information about a user, or the permissions they have, from, for instance, an IdP to the backend server (in fact, the IdP will return the token to the user, the user will give it to the backend server, and the backend server will be able to verify that the user has those permissions because the token is signed). However, these tokens were not designed for handling sessions. That is why, for example, if a user wants to log out, there is no clear way to achieve that in the backend (because the token is stateless and is not meant to be saved in the backend). Therefore, how did the authors manage these shortcomings? They say they use JWTs as stateless tokens, but if that is the case, how can a user successfully log out?" The MDSC paradigm design for serverless computing defense,https://dx.doi.org/10.1117/12.2671158,2023,Yes,Security,No,Function,Yes,Yes,Knative,Yes,Yes,No,No,No,N/A,"The proposal is still very raw, and multiple details are poorly explained. First, an analysis of the threat model was not provided, making it difficult to understand the real extent of what the authors really want to protect. Then, the authors say that one of the phases where they introduce variability is the compilation phase. However, most languages used in Serverless are not compiled but interpreted. Moreover, the methods the authors use to introduce variability should be properly studied to understand whether they, in fact, introduce the necessary variability to avoid vulnerabilities. The practical analysis is not completely clear. What do the authors mean by concurrency? 
Is it the number of functions running on the same node as their own? In that case, were they able to reach 1000 functions on the same node? Also, this analysis has many missing details that make it hard to assess its validity." Hardware-hardened Sandbox Enclaves for Trusted Serverless Computing,https://dx.doi.org/10.1145/3632954,2024,Yes,Security,No,Runtime,"Yes (inferred information, not directly specified by the authors)",Yes (although based on simulation and emulation),N/A,Yes,Yes,No,Yes,No,N/A,"The authors propose many hardware modifications. Although they are probably more secure than using software to harden environments, what is the probability of the proposal being integrated into Intel SGX? And if that were to happen, are the modifications not more disruptive than the authors argue?" Loft: An Architecture for Lifetime Management of Privacy Data in Service Cooperation,https://dx.doi.org/10.1007/978-981-97-1274-8_17,2024,Yes,Security,No,Data,"No (from the description, it does not seem like it, because functions have to call an API to access private data)",Yes,OpenFaaS,Yes,Yes,No,No,No,N/A,"The base idea is interesting. However, there is a lack of proper explanation and detail at times, which ends up hurting the overall paper. The article is missing a threat model, which also does not help in understanding why all the decisions were made. For starters, it is not completely clear why the authors are using a blockchain. Once again, a threat model would help in understanding the problem they are trying to tackle. Then there is the pathway tau: is it part of the function, or the whole function? It is also not clear why there are multiple gateways. Is this system supposed to work in a multi-cloud environment? Moreover, how does the lifecycle estimator estimate the execution time? And what is its accuracy? And what happens when a function, for some non-attack-related reason, needs the private data for longer than the estimated time? Is it terminated? 
Does that not affect availability? Regarding the runtime, the authors say that Loft uses page isolation to isolate the monitor. However, is that enough to protect it from all potential attacks during the function's execution, even if the function gets compromised? In terms of the results, there are no measurements of CPU usage or cold start time, which would be interesting. A potentially interesting point of study in the future would probably be to understand how private data can originate other data that still contains private information. In Loft, it appears that once the private data is computed on for the first time, the new data is not considered private and, therefore, does not need protection. However, this is not always true, and therefore, it would be interesting to understand how we could continue to protect this chain of private data." GRASP: Hardening Serverless Applications through Graph Reachability Analysis of Security Policies,https://dx.doi.org/10.1145/3589334.3645436,2024,Yes,Security,No,Network,Yes,Yes,AWS Lambda,Yes,No,No,Yes,Yes,https://github.com/wspr-ncsu/grasp,"The work is interesting and somewhat of a new area of study in Serverless security. Nevertheless, there is a lack of performance evaluation, and it could be interesting to have a security evaluation, for example, of the effectiveness of the system in finding potentially risky flows. Moreover, the system runs locally for the moment. It could be interesting for it to be executed directly in the Serverless platform so it could automatically alert the client about risky policies. The authors also list some limitations of the proposal in the appendix." 
Sandboxing Functions for Efficient and Secure Multi-tenant Serverless Deployments,https://dx.doi.org/10.1145/3642977.3652096,2024,Yes,Security enabler,No,Runtime,"Yes (inferred information, not directly specified by the authors)",Yes,Knative,Yes,Yes,No,No,No,N/A,"The work seems to be a preliminary analysis of the usage of unikernels in Serverless, so there is not much to say about it. It is rather practical, focusing more on describing technologies than on ideas. A detail that was not completely clear was why the queue-proxy container could not also be a unikernel. It would probably improve the function launch time, right? And it would even increase the isolation of that block, which might be compromised by the function execution (for example, by poisoning from the return data). Something that is missing is a deeper evaluation: the one provided focused mostly on latency. However, it would be interesting to understand the cold start time with unikernels. Moreover, it would also be interesting to study the launching of more complex functions inside such restrictive environments. " SEVeriFast: Minimizing the root of trust for fast startup of SEV microVMs,https://dx.doi.org/10.1145/3620665.3640424,2024,Yes,Security enabler,No,Runtime,"Yes (inferred information, not directly specified by the authors)",Yes,N/A,Yes,Yes,No,Yes,Yes,https://github.com/SEVeriFast/severifast,"This is a very complete work, and the first and only one we found where the authors used AMD SEV as the TEE instead of the more widespread Intel SGX. Most comments we have are similar to those for other TEE-based works, such as: how can we integrate this mechanism with state-of-the-art Serverless platforms or workflows? 
" ATSSC: An attack tolerant system in serverless computing,https://dx.doi.org/10.23919/JCC.fa.2021-0635.202406,2024,Yes,Security,No,Function,Yes,Yes,Knative,Yes,Yes,No,No,No,N/A,"The work is interesting at its core, but there were some details that missed the proper deep explanation, and there were some potential issues not discussed. First, the way the authors generate diversity, although understandable and might cut costs, can be questionable. They generate containers for multiple operating systems and CPU architectures. However, the most problematic issue in functions or in any software is vulnerabilities in the application itself, not in the underlying system. And, although there might be vulnerabilities at the CPU level or at the OS level, in Serverless, the CSP manages almost all layers, including the OS and the hardware. Therefore, if there is a vulnerability, CSPs are able to patch it fast enough for all serverless applications. Not only this, but the authors' algorithm to schedule functions also calculates diversity based on the CVEs of the available OSs. If vulnerabilities are known, is the CSP not able to patch them quickly and update all OSs? It seems that the threat vector is not as problematic as it might sound at first. Then, another way the authors describe creating diversity is at the application level, using the so-called obfuscating transformation method and automated software diversity. For the first, the authors cited a paper related to an obfuscation method in circuits. Therefore, the authors applied a method that was designed and whose effectiveness was studied in circuits, not in software. That could be acceptable if the authors analyzed if, in fact, it could be applied in software and could, in fact, decrease the probability of a vulnerability being exploited in an application. But that was not done. 
For the second method, automated software diversity, the authors cited a paper specifying that further work is still needed to assess whether such methods do, in fact, increase the attacker's effort to reuse exploits. The authors of this paper could, at least, have mentioned the limitations of these two strategies, because their effectiveness will impact the effectiveness of the proposed system. Another potential problem is the way the containers for a function seem to be created. From the authors' description of the function instance manager, it appears that a container for a function is only created when a new request for that function is made. If that is the case, it might be a problem, as it will increase the time needed to launch the function to respond to that request. And this metric was not analyzed by the authors in their evaluation. Another limitation is that the authors seem to describe the system in such a way that it appears to only work for compiled languages. However, many studies show that interpreted languages are the most common type in Serverless. Therefore, how can this mechanism be generalized to such cases? In the redundancy-based validation method, will waiting for multiple functions to complete not incur a lot of performance issues? This should be studied in a practical scenario. Moreover, this redundancy method could potentially be used to detect attacks and even find zero-days. The practical analysis of the article is somewhat limited, as the authors only evaluated the latency for a limited set of system characteristics. Moreover, the security analysis is not empirical, being more a description of how the system works." 
"A Secure, Fast, and Resource-Efficient Serverless Platform with Function REWIND",https://www.usenix.org/conference/atc24/presentation/song,2024,Yes,Security,No,Runtime,"Yes (inferred information, not directly specified by the authors)",Yes,OpenWhisk,Yes,Yes,No (although the authors verified that temporary files were being deleted),Yes,Yes,https://github.com/s3yonsei/rewind_serverless,"This is a very interesting and complete work that, as with Groundhog, protects Serverless runtimes against the usage of warm containers, which might pose a security risk. The work is very detailed and the authors made many experiments to evaluate the system's performance and compared it with Groundhog, demonstrating that their system is more performant (and, in some cases, almost as performant as using the default warm container mechanism of OpenWhisk). However, the same issue pointed to Groundhog above persists in this work: in REWIND, the authors also included an agent inside the container (wich they call proxy), and this agent runs two privileged operations (snapshot and rewind). What is more strange is that the authors pointed this as being an issue in other works in Section 2.3. Therefore, why did the authors chose to use a privileged agent inside the function's container? This was not properly explained, nor did the authors specified this issue in the threat model or in the discussion. Consequently, it seems that the same problems related to an agent running inside the function's container in Groundhog are the same in this work (for example, the function might have a vulnerability, which is explored by an attacker, that is then able to move lateraly and compromise the proxy)." 
Enhancing Effective Bidirectional Isolation for Function Fusion in Serverless Architectures,https://dx.doi.org/10.1145/3652892.3654778,2024,Yes,Security,No,Runtime,"Yes (inferred information, not directly specified by the authors)",Yes,N/A,Yes,Yes,No (although the authors have a security analysis section with some explanation of how their system might offer protection when functions have specific vulnerabilities),Yes,No,N/A,"This is yet another work that provides a two-way isolation sandbox using TEEs and a virtualization technique inside the secure execution environment. Here, the authors compare their proposal, called FUNDUE, with AccTEE. Their system differs from all the other two-way isolation sandboxes because it fuses together multiple functions of the same provider to improve performance. However, there are some potential issues with this approach that the authors did not discuss. First, the authors state that remote attestation is achieved in the gateway enclave. However, it seems that this remote attestation will be impossible in the worker enclave, as it will execute multiple fused functions. Is that not a problem? With this mechanism, the tenant will never be completely sure that the running code is the same as the code provided. Moreover, since the gateway is fusing functions and, therefore, changing their binary, is that not a problem as well? For all purposes, the CSP is, in some ways, breaking the contract with the tenant by modifying its code. Not only that, it might be a security risk, as the modified code might have some vulnerability that did not exist before. Then, there is also the problem of the way the authors divide the memory between functions. The authors say their system will give 400MB of memory to each function. What if a function uses much less memory, perhaps less than 50MB? Is that not a problem from the point of view of wasted computing resources? 
Who will pay for all that memory that is not being used? The authors did not make any practical evaluation of the memory used by their system, something that would add a lot of value to their proposal. Furthermore, the authors are only protecting the memory space. What if one of the functions is attacked and the attacker is able to modify the file system, successfully attacking another of the fused functions? It seems that fusing functions might introduce new threat vectors not analyzed by the authors. The authors also failed to specify how they decide which functions are fused together. What is the methodology? Fusing randomly might, in some cases, reduce the performance of the application. For example, functions that usually communicate with one another could be fused together to increase performance. Another detail lacking discussion is how the gateway is executed. Is it only executed once, when the functions' code is uploaded? Or is it always loaded when the system launches a new worker enclave? Finally, the effort that would be needed to incorporate this system into state-of-the-art Serverless platforms is also questionable." EnTurbo: Accelerate Confidential Serverless Computing via Parallelizing Enclave Startup Procedure,https://dx.doi.org/10.1145/3649329.3658492,2024,Yes,Security enabler,No,Runtime,"Yes (inferred information, not directly specified by the authors, although the system is in a very early stage yet)",Yes (although based on emulation),N/A,Yes,Yes,No,Yes,No,N/A,"Most comments we have in relation to this work are similar to the comments we already had on other works that proposed changing the functioning of Intel SGX. What is the probability of this proposal being integrated? And are there not security issues with changing Intel SGX in such a profound way? Moreover, it is also not completely clear how this system would behave in a more realistic scenario, not based on emulation. 
Furthermore, as with other TEE-based works, it might be difficult to integrate this proposal with state-of-the-art Serverless platforms." MoFaaS: A Moving Target Defense Approach to Fortify Functions as a Service,https://dx.doi.org/10.1109/ISCC61673.2024.10733628,2024,Yes,Security,No,Function,Yes,Yes,OpenWhisk,Yes,Yes,"No (although the authors made a very early study of the effectiveness of the system, which can be related to IKC)",Yes,No,N/A,"The work is still at a very early stage. There are still some open issues, such as the cost of developing multiple variants. The authors argue it is possible to cut costs by using code translation tools, but they do not provide any practical way of doing so; this should be tested in future works. Moreover, the evaluation is still somewhat poor and does not use any real application or real scenario." Secure Computation Offloading with ETSI MEC+QKD,https://dx.doi.org/10.1109/ICTON62926.2024.10648156,2024,Yes,Security,Yes,Network,"Yes (inferred information, not directly specified by the authors)",Yes,N/A,No,No,No,No,No,N/A,"The work is somewhat similar to the other works related to ETSI MEC and QKD (the authors are the same). The novelty is limited: it seems that the authors added some new APIs. Because of that, the critiques of the other two works also apply to this one." SMART: Serverless Module Analysis and Recognition Technique for Managed Applications,https://dx.doi.org/10.1109/CCGrid59990.2024.00057,2024,Yes,Security,No,Network,"Yes (inferred information, not directly specified by the authors)",Yes,AWS Lambda,Yes,No (at least in the sense of computing performance; the authors evaluated their algorithms' performance in the sense of how good they are),No,No,No,N/A,"The work is very interesting, and creating graphs from logs is something not yet seen in these past security works. However, there are some issues not discussed by the authors. First, in a real scenario, how does this system create the graphs? 
Does it require first scanning the application with test calls before production? Or does it run from the beginning, always collecting data from the logs to create the graph? Will that not be affected if there is an unrecognized attack right from the beginning? What will be the performance penalty of having the mechanism always running? It is difficult to assess whether the system could be used in a production scenario without any computing performance analysis." SURE: Secure Unikernels Make Serverless Computing Rapid and Efficient,https://dx.doi.org/10.1145/3698038.3698558,2024,Yes,Security enabler,No,Runtime,"No (at least, according to the application compatibility section description)",Yes,N/A,Yes,Yes,No,Yes,Yes,https://github.com/ucr-serverless/sure,"This is yet another very complete work. Most of its potential downsides have to do with it not being transparent to functions (functions need to be updated to manage memory), and it might be difficult to integrate it with other Serverless platforms. Still, for scenarios requiring high performance, it might be a good fit, as performance might be more important than ease of use."